Artificial intelligence (AI) has been transforming business processes, technology, and customer experiences for several years. However, recent attention has focused on its ability to produce “creative” materials such as text, images, videos, and conversations. ChatGPT, Dall-E 2, and other AI platforms are gaining popularity with companies and individuals alike and are poised to unlock untapped potential and efficiency when it comes to creativity.
“Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. About 75 percent of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D.”
Mckinsey, 2023
Generative AI tools can automate complex tasks, predict outcomes based on data inputs, and create original, insightful content. In addition to streamlining operations, generative AI has the power to offer unparalleled value and growth to your customers.
What is generative AI?
Generative AI is a subset of AI. AI uses algorithms (sets of rules) to perform tasks in a way that mimics human intelligence. Specifically, generative AI takes these algorithms and uses them to create or “generate” something new. More traditional types of AI include predictive AI, which draws upon past data to make inferences about future values, and conversational AI, which powers natural language interactions between humans and tech — think Siri, Alexa, or customer support chat boxes.
Using generative artificial intelligence… intelligently
With all of the potential uses of generative AI, from creating marketing materials and customer communications to driving product design ideation and refining branding concepts, many companies are eager to embrace it. In fact, the generative AI market is predicted to be valued at $110.8 billion by 2030. According to Gartner, 30% of all outbound messages from enterprise companies will be generated by AI by the year 2025.
But like any new tool or technology, generative AI can produce “shiny object syndrome,” or the desire to immediately implement the latest or trendiest product, tech, or process, typically at the expense of current strategy or business priorities. Shifting gears suddenly without a plan or goal in place is usually a recipe for disaster, potentially leading to wasted resources, frustrated employees, and even angry customers.
That’s why generative AI, as exciting and revolutionary as it is, should be implemented with caution and forethought. And that means following the 4Gs of AI.
The 4Gs of AI
Before jumping on the generative AI bandwagon, teams need to think through some key questions. What rules do we need to follow when using AI-generated materials? How will AI help us reach our business goals? When, where, and how will we use AI? How will we track the performance of AI-related initiatives?
These questions can be summarized by the 4Gs of AI:
- Governance: understanding, following, and enforcing regulatory compliance when it comes to generative AI
- Guidance: aligning AI with business goals
- Guardrails: setting boundaries around AI use to minimize risk and optimize outcomes
- Grounding: solidifying AI outputs in real-world data to ensure accuracy and relevance
Let’s go through each in more detail.
1. Governance
In their haste to take advantage of the benefits of generative AI, organizations may adopt informal, ad hoc approaches to implementation without a proper governance structure, leading to inconsistencies, non-compliance, and a lack of accountability.
Major risks of poor governance include regulatory fines, reputational damage, and potential legal liabilities. In addition, a lack of transparency can create distrust among stakeholders and impede AI adoption in other areas.
To avoid these problems, we recommend developing a comprehensive Infosec AI governance framework that includes policies, standards, and restrictions. Establish clear roles and responsibilities for AI decision-making within the organization, and create processes that ensure compliance with applicable laws, regulations, and industry standards.
Since AI is new, relevant rules and regulations (and possible risks) are constantly evolving. As a result, you should regularly review and update your governance policies and procedures.
2. Guidance
Without clear guidance, organizations may pursue AI projects that do not align with business objectives or may inefficiently allocate resources. Misaligned AI initiatives can result in wasted money, opportunity costs, and failure to realize the full potential of AI investments.
Before implementing any generative AI tools into your business activities, develop an AI strategy articulating the organization’s goals and objectives for AI adoption. This should include a clear roadmap for AI implementation, complete with timelines, milestones, and key performance indicators.
There are a number of AI platforms on the market, so evaluate these different models and ensure the use of secure, effective, performant, and efficient alternatives if needed. And when you do determine that AI-related projects make business sense, prioritize those projects by potential impact, feasibility, and alignment with the organization’s strategy.
Once again, don’t forget that AI is new! Architect for flexibility as models learn and features may change rapidly and suddenly.
3. Guardrails
Without ethical guardrails, companies may adopt AI applications with biased or unfair outcomes, which can harm individuals or groups and undermine trust in AI systems. In addition, ethical lapses in AI systems can lead to public backlash, brand and reputation damage, and potential legal liabilities.
That’s why we suggest developing ethical principles and guidelines to govern AI application design, development, and deployment. Make sure all employees understand the issues at stake and foster a culture of ethical AI use within the organization through training, awareness campaigns, and ongoing dialogue. If ethical concerns do arise, have processes in place to identify and address them.
In addition, conduct regular audits of AI systems to ensure compliance with ethical guidelines and uncover potential biases or unfair outcomes. Protect business outcomes with system parameters, moderation APIs, and code guardrails to avoid misuse, abuse, prompt hacking, and prompt injections.
4. Grounding
AI applications not grounded in real-world data can produce inaccurate, irrelevant, or misleading results, limiting their effectiveness and utility. Common issues include generative AI errors and “hallucinations” or nonsensical outputs. These negative outcomes can lead to poor decision-making, reduced trust in AI systems, and missed opportunities for AI-driven innovation.
Effective grounding starts at, well, the ground: your data. Invest in collecting, curating, and maintaining high-quality, diverse, and representative data sets, and develop data management and governance practices to ensure data integrity, security, and privacy.
To reduce the risk of errors, plan for ongoing evaluation of prompt completions, few-shot prompts, embedded data, and augmented retrieval. Programmatically validate AI models using real-world data and ensure their performance meets the organization’s requirements, promptly investigating false positives, false negatives, and other misclassifications to uncover potential biases, data quality issues, or shortcomings in the model architecture.
Nintex + AI
Our low-code SaaS platform, enhanced with generative AI capabilities, can act as a dynamic growth accelerator for your organization. Data connectors in Nintex support common approaches like SQL, REST, and OData, making human-centered AI business apps attainable and quick to deploy.
Key features like automated content creation, predictive analytics, and advanced automation can help customers in areas such as screening and sourcing, extracting insights from data analysis, enhancing portfolio management, and improving market intelligence monitoring.
To see Nintex + AI in action, request a demo.