The story of innovation has always been a story of tension. New technologies arrive promising to expand human capability, and almost inevitably they trigger fear. In the 1960s, the proposal for a National Data Center in the US, at a time when the country had only 15,500 computers, was seen as a step toward Orwellian surveillance. Early cars were derided as dangerous indulgences that no sensible person would want to travel in. Even the novel, in the 18th century, was condemned for fostering “reading addiction” and moral decay.
Artificial Intelligence is no different. Much of today’s public conversation is framed around “What could go wrong?” Reid Hoffman and Greg Beato’s Superagency urges us to flip the question: “What could go right?” This is not an excuse to ignore risks, but a framework for unlocking AI’s potential while managing its challenges.
One of Superagency’s most important ideas is iterative deployment: release new technology in stages, so that society adapts alongside it. This is how cars went from having no seatbelts to today’s complex safety systems. Widespread use revealed real risks, which in turn drove safety innovation—airbags, traffic laws, crash-test standards.
AI will benefit from the same cycle. Early exposure allows users, companies, and policymakers to understand real-world capabilities and limitations. This builds social norms and practical regulation grounded in lived experience, not speculation. The alternative—regulating AI before we truly understand it—is like drafting motorway rules based on horse-drawn carriage speeds.
For Oleka, this principle matters because AI is not a single “release.” It will evolve over decades. The revolution will not arrive in one leap but in hundreds of steps, each unlocking new capabilities and creating new business models. That timeline creates opportunity for those who can see past today’s limits.
Another key Superagency insight is that safety and innovation are symbiotic. Mass adoption of cars revealed the need for brakes, indicators, and child seats—features unthinkable without cars themselves. The same will happen with AI: the path to safer AI runs through building and deploying it, not freezing it in the lab.
This is not a case for recklessness. It’s a case for building safety into the growth curve. The market will reward systems that are reliable, trusted, and aligned with human needs. Those incentives will spur safety innovation just as much as capability innovation.
Because AI’s capabilities will compound over years, there is an inevitable mismatch between what most entrepreneurs think is possible today and what will be possible tomorrow. We’ve seen this in every technology cycle: early internet businesses often underestimated the impact of broadband, smartphones, or app stores until they were fully in place.
For investors, this mismatch is an advantage—if you know how to read it. At Oleka, our approach to AI investing is built around two complementary plays:
Moonshots with a pathway – Ambitious ideas that may seem distant today but are plausible within the arc of AI’s next decade. These bets require clear technical roadmaps and staged milestones, not blind optimism.
Application-layer companies that work now – AI-native products that are already delivering value in markets with proven demand. These range from AI-native market research platforms that can generate and validate insights in hours, to AI-powered marketing applications that dynamically personalise campaigns at scale.
This dual strategy allows us to capture value at both ends of the timeline: immediate adoption and long-term transformation.
There is a global race for Artificial General Intelligence—systems that match or surpass human performance across the full range of cognitive tasks—and it is likely that the US or China will get there first. But for Türkiye, the bigger prize is not in winning that frontier sprint. It is in winning the other race: the rapid, broad deployment of AI across our economy.
AI is a general-purpose technology, like electricity or the internet. History shows that the nations which gain most from such technologies are not always the ones that invent them, but the ones that adopt them fastest and most widely. America’s manufacturing dominance in the late 19th century came not from having the most groundbreaking inventions, but from being the quickest to integrate them into production, backed by a workforce trained to use them.
For Türkiye, this means building the diffusion infrastructure—training business leaders, especially in SMEs, to redesign operations around AI, not simply bolt it onto the IT department. The productivity benefits of electricity only emerged when factories reorganised around the electric dynamo; the same will be true of AI. If its use remains siloed, only a fraction of its value will be realised.
The challenge is urgent. Most of the workforce that will be active in 2030 is already in jobs today. That requires large-scale, credible, and accessible AI training for people in work now—not just future graduates. It also means shifting public perception from fear to opportunity, by demonstrating real improvements in daily life: faster government services, easier access to finance, personalised education, streamlined healthcare.
Governments have a role to play too. With the state accounting for over 40% of OECD economies, public-sector AI adoption is essential for economy-wide transformation. That demands regulation that protects against genuine frontier risks without choking off mainstream business use. Over-regulation at this stage would be like regulating modern motorways before the car was even invented.
Türkiye’s competitive advantage lies in speed—deploying AI into its economy more quickly than regional peers, creating a productivity uplift that compounds year after year. This is where the opportunity sits for both public and private investment, and where Oleka focuses: backing AI application-layer companies that deliver results now, while positioning for the capabilities that will emerge tomorrow.
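To make that compounding point concrete with purely illustrative numbers (the 3% figure below is an assumption, not a forecast): a sustained annual productivity uplift of g grows output by a factor of (1 + g)^n after n years.

$$\text{cumulative uplift} = (1+g)^{n} - 1, \qquad \text{e.g. } (1.03)^{10} - 1 \approx 0.34$$

In other words, even a modest 3% annual gain, sustained for a decade, adds roughly a third to output, and the earlier and more broadly deployment begins, the larger that compounded advantage becomes.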
At Oleka, we see AI as both the most powerful general-purpose technology of our era and the most misunderstood. The challenge is not to decide whether to build—it’s to decide how to build so that the benefits scale faster than the risks.
We believe the next decade will reward those who combine vision with iteration, ambition with adaptability. The AI revolution is not a sprint to a single finish line; it’s a series of compounding steps. Our job is to identify the teams capable of taking those steps—whether they’re charting a path to a moonshot or delivering AI products that work today—and to help them navigate the road ahead.
The question is not just “What could go wrong?” The better question is: “What could go right—and how do we make sure it does?”