Early in my career, I worked at Motorola, a company renowned for its commitment to product quality. I recall many corporate training modules where Motorola’s Six Sigma quality program impressed upon me that to increase a product's value to the end user, you must work to increase its reliability.
Simply put, customers want products that work, and they become frustrated when your product does not meet their expectations.
That lesson has stayed with me to this day. I remain interested in how to increase product value and customer satisfaction by improving the quality of deliverables. Recently, I have been searching for ways to enhance the quality of content in generative AI applications, which cannot escape the same basic premise: customers value a reliable, performant product.
The Legacy of Six Sigma at Motorola
Motorola leaders: Bob Galvin (l) and Bill Smith (r)
Motorola’s quality commitment traces back to then-CEO Bob Galvin and the Six Sigma program developed by engineer Bill Smith. Quality is a key differentiator: when suppliers deliver on this front, they secure the initial sale and pave the way for repeat and add-on sales by proving themselves as dependable partners.
Six Sigma enabled Motorola to boost revenue and market share across its key business sectors, past and present: public safety, paging, cellular networks, and satellite technologies. All of these verticals carry high expectations for quality and reliability.
Scope, Quality and the Triple Constraint
The Project Management Triple Constraint
Project management professionals know the triple constraint: scope, cost, and schedule. Though quality is often considered an implicit part of scope, it is frequently sacrificed to achieve faster delivery and lower costs.
In hypercompetitive markets where first-mover advantage matters, this tradeoff may be necessary to win the initial sale. After that sale is made, however, it is crucial not to neglect quality. Customers may initially tolerate a less robust product, but long-term success with strategic customers depends on consistency and reliability.
The Challenges of Generative AI Applications
Robot Factory Worker - Source: DALL-E 3
The inherent design of generative AI applications and the neural networks they rely upon present unique challenges, especially when customer requirements set high expectations. The probabilistic and stochastic nature of large language models (LLMs) and neural networks means they inherently produce outputs with some variability and unpredictability.
Probabilistic Nature of LLMs
Generative AI, like LLMs, operates on patterns learned from vast datasets. This means the output is a product of probabilities rather than deterministic rules. While this allows for remarkable creativity and flexibility, it also introduces the risk of "hallucinations" – factually incorrect or nonsensical outputs.
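To make this concrete, here is a minimal sketch of how an LLM samples its next token from a probability distribution rather than following deterministic rules. The logits and token names are made up for illustration; the `temperature` parameter, as in real inference APIs, controls how much the distribution is flattened (more variability) or sharpened (more deterministic).

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from a softmax distribution over raw model scores.

    Higher temperature flattens the distribution (more variable output);
    lower temperature sharpens it (more deterministic output).
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract max for numerical stability
    exp_scores = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exp_scores.values())
    probs = {tok: e / total for tok, e in exp_scores.items()}

    # Draw from the cumulative distribution.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding at the tail

# Hypothetical logits: the same input can yield different tokens across calls.
logits = {"Paris": 5.0, "London": 3.0, "Tokyo": 1.0}
print(sample_next_token(logits, temperature=1.0))
```

Because the output is a draw from a distribution, repeated calls with identical inputs can differ, which is exactly the variability that makes quality assurance for generative AI harder than for deterministic software.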
Robotic Hallucinations - Source: DALL-E 3
Techniques to Mitigate Risks
Use larger models, trained on more data
Larger models trained on more extensive and diverse datasets are significantly less likely to produce erroneous outputs. Their broader training exposure helps them capture nuances and complex relationships, so they can distinguish subtle differences and avoid incorrect associations. This deeper understanding yields more accurate and reliable outputs, improving overall model performance.
Use prompt engineering
Prompt engineering involves carefully designing the inputs to the AI model to guide its outputs more effectively. By refining prompts and using contextually rich inputs, the likelihood of generating accurate and relevant responses increases. I have explored and experimented with this specific technique to enhance the predictability and reliability of generative AI applications I created, and I wrote about my experience in this blog post.
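As a minimal sketch of this idea, the helper below assembles a prompt that supplies explicit context and instructs the model to answer only from it. The function name, delimiters, and fallback phrase are illustrative choices, not a standard API, but the pattern of grounding the model in supplied context with an explicit "I don't know" escape hatch is a widely used hallucination-reduction technique.

```python
def build_grounded_prompt(question: str, context: str) -> str:
    """Assemble a prompt that constrains the model to the supplied context.

    The instruction block, section delimiters, and explicit fallback
    answer are common prompt-engineering patterns for reducing
    hallucinated responses.
    """
    return (
        "You are a careful assistant. Answer ONLY using the context below.\n"
        'If the context does not contain the answer, reply: "I don\'t know."\n\n'
        f"### Context\n{context}\n\n"
        f"### Question\n{question}\n\n"
        "### Answer\n"
    )

prompt = build_grounded_prompt(
    question="Who developed Six Sigma at Motorola?",
    context="Motorola engineer Bill Smith developed Six Sigma in 1986.",
)
print(prompt)
```

The resulting string would be sent to the model as its input; the richer and more constrained the prompt, the narrower the space of plausible completions.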
Deploy AI agents
The emergence of AI agents represents a major leap in improving the quality and reliability of generative AI applications. These agents autonomously perform tasks, monitoring and validating AI model outputs to create effective feedback loops. For example, AI applications can fact-check each other, adding scrutiny and correction before delivering the final output. This process enhances accuracy and reduces errors, as seen in natural language processing where AI agents refine generated text to meet quality standards. However, AI agents require precise programming and can introduce biases in feedback mechanisms, presenting significant challenges.
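The generator-critic feedback loop described above can be sketched as follows. All three inner functions are hypothetical stand-ins: in a real system, `generate_draft`, `fact_check`, and `revise` would each be calls to an LLM or a retrieval-backed verifier, and the critic would check claims against trusted sources.

```python
def generate_draft(prompt: str) -> str:
    """Stand-in for a generative model call (hypothetical)."""
    return f"Draft answer for: {prompt}"

def fact_check(draft: str) -> list[str]:
    """Stand-in for a 'critic' agent that returns a list of issues.

    A real critic might be a second LLM call that cross-checks claims
    against a trusted source; here it simply flags the placeholder word.
    """
    return ["unsupported claim"] if "Draft" in draft else []

def revise(draft: str, issues: list[str]) -> str:
    """Stand-in for a revision step that addresses the critic's issues."""
    return draft.replace("Draft", "Revised")

def agent_loop(prompt: str, max_rounds: int = 3) -> str:
    """Generator-critic loop: iterate until the critic finds no issues
    or the round budget is exhausted."""
    draft = generate_draft(prompt)
    for _ in range(max_rounds):
        issues = fact_check(draft)
        if not issues:
            break
        draft = revise(draft, issues)
    return draft

print(agent_loop("What is the capital of France?"))
```

The bounded `max_rounds` budget matters in practice: without it, two disagreeing agents can loop indefinitely, and each round adds latency and cost.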
Limitations of These Techniques
It is important to note that while these techniques reduce hallucination risks, they cannot eliminate them due to the models' probabilistic nature.
The Importance of Human Intervention
Mission Critical Robot Firefighters - Source: DALL-E 3
In mission-critical applications, where consistent product and service reliability is paramount, relying solely on generative AI without human oversight is risky. Human intervention is essential to verify and validate AI-generated content, ensuring it meets the required standards of accuracy and reliability. Your product’s and company’s reputation are at stake.
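One common way to operationalize human oversight is confidence-based routing: outputs the system is highly confident in flow through automatically, while everything else is queued for a human reviewer. The function and threshold below are a minimal illustrative sketch, not a prescribed design.

```python
def route_output(text: str, confidence: float, threshold: float = 0.9) -> tuple[str, str]:
    """Route an AI-generated output based on a confidence score.

    Outputs at or above the threshold are auto-published; everything
    else is sent to a human review queue. The 0.9 threshold is an
    arbitrary example value that a real system would tune per use case.
    """
    if confidence >= threshold:
        return ("publish", text)
    return ("human_review", text)

# High-confidence output ships; low-confidence output waits for a human.
print(route_output("The sensor reading is nominal.", confidence=0.97))
print(route_output("The reactor can be safely shut down.", confidence=0.55))
```

For genuinely mission-critical decisions, many teams set the threshold so that every output is human-reviewed, using the model only to draft.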
Generative AI: Innovation, and Now Regulation
The European Union, on March 13th 2024, formally passed the AI Act, which sets comprehensive rules for the development and use of AI systems, especially focusing on high-risk applications. The AI Act is designed to regulate AI applications based on their potential risks to society, with the highest scrutiny applied to high-risk applications such as those in healthcare, law enforcement, and critical infrastructure.
In addition to the AI Act, the EU has ensured that this new regulation works in harmony with existing laws like the GDPR. The AI Act incorporates stringent requirements for transparency, accountability, and human oversight, particularly for high-risk AI systems.
Conclusion
High software quality is crucial for enhancing product and service value. Companies like Motorola have demonstrated that consistent, reliable products boost customer satisfaction, loyalty, and revenue. Generative AI, however, presents new challenges in maintaining quality. Techniques exist to reduce quality risks, but the inherent variability of AI models requires human oversight, particularly in mission-critical applications. Balancing automation with human intervention is essential for maintaining high-quality standards as AI integration continues.