Building a culture of innovation around Generative AI & LLMs
Generative AI (GenAI), and Large Language Models (LLMs) in particular, took the world by storm in 2023.
ChatGPT went from 0 to 100 million users in just two months. Even more impressive is the blistering speed at which GenAI technologies themselves have developed, advancing from toy models to systems that rival human performance on a growing range of tasks in a matter of months.
While these developments have been exciting for those of us with a strong appetite for innovation, they have been extremely challenging for large organizations that have had to develop policies around how these technologies should be used. Whatever the immediate policies might be (with many companies taking the safe route and forbidding tools such as ChatGPT), the productivity and quality gains are clear among those who have adopted them. In one Harvard study, BCG consultants who used GPT-4 saw a 40% performance boost and a 25% speed boost in their work. [1]
Statistics like these, which are bound to become more impressive as the tools improve and the user experience becomes more streamlined, mean that companies looking to stay competitive are unlikely to remain on the "safe" side for very long. The lure of cutting-edge technology, coupled with tangible benefits, makes it inevitable that more organizations will start to explore, if not fully embrace, the use of Generative AI and Large Language Models. This shift towards adoption, however, requires a delicate balance. On one hand, it offers the promise of unprecedented efficiency and innovation; on the other, it poses ethical, security, and societal challenges that cannot be ignored.
The journey towards a culture of innovation around Generative AI and LLMs in large organizations involves more than just deploying new technologies. It requires a holistic approach that encompasses policy development, ethical considerations, and culture change.
While companies can approach the first two with their existing rulebooks, culture change relies on a deep understanding of organizational psychology. This is where insights from the behavioral science of innovation might prove helpful.
How do you build an AI innovation-hungry culture?
Crafting a culture that not only adapts to but thrives on the advancements of Large Language Models and Generative AI requires a nuanced understanding of what sets these technologies apart: leveraging their unique capabilities while navigating their specific challenges. Here are some strategies that sit at the intersection of behavioral science and the distinctive nature of GenAI:
1. Specialized Innovation Labs for LLM Experimentation
Establish innovation labs focused on LLM applications. These labs can serve as incubators for projects ranging from natural language understanding and generation to complex decision-making systems. By concentrating on LLMs, these labs can dive deep into the nuances of language model training and fine-tuning, and explore novel applications in fields like legal tech, healthcare, and customer service, where natural language processing can revolutionize traditional operations.
2. "LLM Champions" Program
Launch an "LLM Champions" program to identify and support individuals across the organization who are particularly enthusiastic about or skilled in using LLM technologies. These champions can receive advanced training, access to cutting-edge research, and opportunities to attend industry conferences. They can serve as mentors and advocates within the organization, helping to spread knowledge about LLM capabilities and best practices, while inspiring others to explore potential applications in their work.
3. Honest Showcases
Regularly showcase the good, the bad, and the ugly of real-world LLM implementations within the organization. Highlight tangible benefits, such as improved efficiency in document analysis, enhanced creativity in marketing content creation, or breakthroughs in customer service chatbots; this illustrates the value of LLMs beyond the hype. Showcases can also serve as learning opportunities, demonstrating the practical steps taken from concept to deployment and the lessons learned along the way.
And, of course, none of this works without a broader strategy for building an innovation culture. The following components are a little less exciting, but no less crucial:
1. Leadership Commitment and Vision
The journey begins at the top. Leadership must not only express verbal commitment to embracing GenAI and LLMs but also demonstrate dedication through action. This includes allocating resources for AI projects, setting strategic priorities around AI innovation, and embodying a culture of lifelong learning. Leaders should articulate a clear vision of how AI can enhance the organization's mission and values, setting the stage for a culture that views AI as an integral part of its future.
2. Encourage a Learning Mindset
Innovation thrives in environments where learning is continuous and encouraged. Organizations should invest in comprehensive training programs that cover both the technical aspects of GenAI and LLMs and the ethical considerations involved in their adoption. Creating internal platforms or forums where employees can share their experiences, challenges, and solutions related to AI projects can also foster a community of learning and mutual support.
3. Promote Cross-disciplinary Collaboration
AI innovation is not just the domain of data scientists and technologists; it requires input from across the organization. Encouraging teams from different departments to collaborate on AI projects can lead to innovative applications that might not have been explored otherwise. This cross-pollination of ideas ensures that AI solutions are grounded in real-world needs and benefit from diverse perspectives.
4. Ethical Frameworks and Transparency
Developing and implementing ethical guidelines for AI use is crucial. These guidelines should be transparent, accessible to all employees, and include principles on data privacy, fairness, accountability, and the mitigation of biases. Engaging employees in discussions about the ethical implications of AI projects can help raise awareness and ensure that ethical considerations are integrated into the innovation process.
5. Celebrate Experimentation and Tolerate Failure
Cultivating an innovation-hungry culture means creating an environment where experimentation is celebrated and failure is seen as a learning opportunity. Teams should be encouraged to take calculated risks without fear of retribution if an initiative does not yield the desired outcome. Recognizing and rewarding new ideas and efforts, even when they don't fully succeed, keeps motivation high and reinforces the value placed on innovation.
What does this mean for you?
It's clear that the future of competitive advantage lies not just in the adoption of new technologies like Generative AI and LLMs, but in the ability of organizations to create a culture that embraces change, values learning, and is unafraid to push boundaries. This culture will be characterized by a relentless pursuit of improvement, a deep commitment to ethical considerations, and an inclusive approach that leverages the diverse skills and perspectives of all employees.
Is that something you’re working on building in your organization?
We'd love to hear from you. Send us a message (https://thedecisionlab.com/contact) and our team will reach out to you or send some relevant resources your way.
References
1. Marshall, M. (2023). Enterprise workers gain 40 percent performance boost from GPT-4, Harvard study finds. VentureBeat. https://venturebeat.com/ai/enterprise-workers-gain-40-percent-performance-boost-from-gpt-4-harvard-study-finds/
About the Author
Sekoul Krastev
Sekoul is a Co-Founder and Managing Director at The Decision Lab. A decision scientist with an MSc in Decision Neuroscience from McGill University, Sekoul’s work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.