ARTIFICIAL INTELLIGENCE, PROMISE OR PERIL: PART 3 – AI GOVERNANCE AND VENTURE CAPITAL

Dr. James Baty, Supreet Manchanda and Paul Dugsin • Dec 15, 2023

by Dr. James Baty PhD, Operating Partner & EIR, and Supreet Manchanda & Paul Dugsin, Founding Partners


RAIVEN CAPITAL


This is our third release in our series on governing
Artificial Intelligence – 
AI Promise or Peril 


This release consists of four sections 


AI Impact on Business & Markets

AI Impact on VC Operations

Impact of AI Governance on VC

Key Outstanding Issues


The first release on AI Ethics is available here.

The second release on AI Regulation is available here.


In the inaugural episode of our ‘AI, Promise or Peril’ series, we delved into the clamor surrounding Artificial Intelligence (AI) Ethics—a field as polarizing as it is fascinating. Remember the Future of Life Institute’s six-month moratorium plea, backed by AI luminaries? Opinions ranged from apocalyptic warnings to messianic proclamations to cries of sheer hype.


In our second episode, we examined the chaos around emerging AI regulation: a cacophony of city, state, national, and international regulatory panels, pronouncements, and significant legislative and commission enactments. We examined the EU AI Act and the US NIST AI Risk Management Framework among the key models, and suggested there was a strong case for a US Executive Order on AI based on the NIST AI-RMF. Indeed, the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence was issued by President Biden on October 30, 2023. In addition, the United Nations announced in October the creation of a 39-member advisory body to address issues in the international governance of artificial intelligence.


Our blog posts observed that the emerging solution to AI governance is not one act or law; it encompasses corporate self-governance, industry standards, market forces, national and international regulations, and especially AI systems regulating other AI systems. AI governance affects not just those developing AI projects, but also those leveraging existing AI tools. Whether you’re at the helm of an AI project or using tools developed by others, navigating this terrain involves a complex interplay of legal regulations, voluntary ethical standards, and a web of emerging regulatory issues.


RAIVEN’s “VC” PERSPECTIVE 

Before we begin, let’s mention Raiven Capital’s unique viewpoint:

We’re not in the business of chasing far-fetched dreams. Our game is more grounded – Raiven steps in when the seed phase is over and an MVP is ready, with a keen eye on substantial returns within a five-year window, and with building bridges to capital, knowledge and markets as the key to business success.


Our arena? It’s at the forefront of the fifth industrial revolution. Imagine scenarios where technology is not just an aid but an integral part of the human work environment, tackling the real issues plaguing our era and brewing next-gen operational solutions. In this post we address the larger issues facing the industry in general, but we also focus on what is central to our own strategy.


AI IMPACT ON BUSINESS AND MARKETS


Before we go into specific ways AI governance impacts venture capital, let’s consider the broader implications of AI on the business world, particularly in the tech sector. Artificial intelligence isn’t just a buzzword or a trendy add-on in the tech industry; it is a revolutionary force. 


Think about it: AI implementations are not just improving business processes; they’re creating entirely new business frameworks. The old ways of doing business? They’re being disrupted and turned on their heads. We’re witnessing a fundamental, potentially existential shift in how businesses operate and compete. For tech businesses, AI is not just a new frontier; it’s the difference between staying relevant and becoming obsolete. It’s not just about being smarter or more efficient. It’s about reimagining what’s possible, reinventing business models, and revolutionizing industries. In the AI era, adaptability, innovation, and foresight are the new currencies.



An Example: Is This the End of Internet Search?


To understand this, let’s consider a specific use-case that is dominating headlines: Is OpenAI / ChatGPT blowing up internet search business models? Google has long reigned supreme, owning 92% of the Internet search market (with Microsoft a distant second at 2.5%). This dominance extends to internet search advertising revenue, where Google commands an impressive 86%, outperforming all other companies in advertising earnings.


Of course, Google has long pioneered research into AI’s potential to enhance this market, but the company has approached its integration with caution. This conservative stance left a gap – a window of opportunity for ChatGPT to radically shake up the market. In parallel came Microsoft’s investment in OpenAI, and Marc Andreessen opining on the Lex Fridman podcast about whether this is the end of internet search as we know it.


The overall market threat is twofold:


First, a paradigm shift in the current advertising model: If internet search evolves from the traditional listing pages of ten websites, interspersed with advertising, to having the chat interface of the service simply provide ‘the answer,’ then the opportunity to place advertising is notably reduced. 


Secondly, disruption of the website business model: If search just returns an answer and no website listing, the larger implication is profound. The conventional website business model, which has been the mainstay for over 25 years, faces significant disruption. The traditional storefront and public relations image, represented by a website, and its value as an internet face for all businesses could be radically diminished or rendered obsolete.


However, it’s essential to consider the concurrent emerging advancements in user experience architecture. Taking the example of the Bing/Edge interface developed by Microsoft, which integrates AI tools within a broader web window design, we observe a strategic incorporation of generative chat functionalities. This approach does not merely deliver answers, it also connects users to relevant articles and webpages that inform those answers, simultaneously creating new opportunities for advertising placement. Such an enriched environment, where generative AI and conversational prompting emerge as key components of the interface, is set to redefine our interaction with internet resources.


And the players? Whereas Microsoft was facing a continuously declining share of internet search traffic and advertising revenue, it has now become a player in defining the next internet interface through its investment in OpenAI. Clearly, this is not the end for Google. It has already released its Bard interface using its LaMDA LLM generative AI technology, and it still has volumes more user data and subscribers than its closest competition, but this may be the moment that classical ‘internet search’ and its integrated advertising model ‘jumped the shark.’ 


In summary, while the fundamental primitives of search, advertising, and the Web will persist, their operational dynamics are poised for significant change. Emerging trends suggest a transition from reliance on user-generated keywords to more sophisticated AI-driven guidance, enabling a more anticipatory and personalized user experience. Personalization and AI agents will become the new top UI level of the application stack, heralding a new era in digital interaction and information retrieval.


Insights for Investors / VCs: AI will notably disrupt search, advertising and web presence.

There are loads of ways in which AI is impacting business, and it would take a dissertation to examine them all here, so let’s leave this one example as an illustration of why understanding the business impact of AI is so important to venture capital.


AI is creating huge opportunities – AI businesses and businesses leveraging AI – that affect competitive market structure and profitability. The use of AI to amplify human capital will be a key component of transforming entire industries in the fifth industrial revolution, an idea as big as any of the previous industrial revolutions: artificial intelligence will reshape businesses, moving them to unprecedented levels of autonomy by integrating decision-making into processes, raising potential profitability even as competitive use of the same technology paradoxically squeezes margins. AngelList reported midyear that “AI deal share has increased more than 200% on AngelList in the past year, even while the venture market is down 80% in 2023”. 


AI is hot! It is critical to understand the transformative effects of AI and other IT technologies on the business landscape and the economy, especially when evaluating and investing in VC opportunities. Clearly AI is driving a wave of new startups.


Insights for Investors / VCs: Look for even more disruptive opportunities.

AI IMPACT ON VC OPERATIONS


AI Fintech is Improving Basic VC Operations


How will artificial intelligence impact the venture capital industry? It already has. Virtually every tool, resource, support component and activity in venture capital already has AI integrated into it. AI’s integration is far from superficial; it is foundational. Fundamental tools and resources pivotal to VC operations, like Pitchbook and Carta, are rapidly expanding their utilization of AI technologies. This isn’t just a trend riding the wave of ChatGPT’s media frenzy; it’s an ongoing strategic evolution. 


Venture capital fintech firms have been embedding AI into their frameworks for years, subtly yet significantly altering the industry’s anatomy. It’s not about replacing human decision-making but augmenting it with data-driven insights. This adoption extends beyond basic functionalities, marking a paradigm shift in how VC firms operate. 


To understand the depth of AI’s impact on the VC landscape, let’s consider the key sectors where AI is transforming VC, and examine specific instances and the aspects of the VC process they influence.


At the top level, we can categorize the significant impact of AI on the VC process into three core areas:

   

Enhancing the deal sourcing and due diligence process: AI can help VC firms find and evaluate potential investments by analyzing large amounts of data, such as market trends, company performance, social media sentiment, and customer feedback. AI can also help VC firms reduce the risk of bias and human error in their decision making. 


Providing personalized and data-driven advice to portfolio companies: AI can help VC firms provide better support and guidance to their portfolio companies by leveraging data and analytics to generate insights and recommendations. AI can also help VC firms monitor the performance and health of their portfolio companies and identify potential issues or opportunities.

Improving the efficiency and transparency of the VC industry: AI can help VC firms improve their internal operations and processes by automating tasks such as reporting, accounting, and compliance. AI can also help VC firms increase their transparency and accountability by providing clear and consistent communication and feedback to their stakeholders – investors, founders, and regulators – at scale.



Specific Fintech Applications of AI in VC 


To understand how broad and deep this revolution is, let’s list a few significant specific ways in which AI is being used in the improvement of venture capital processes and operations.


Deal sourcing: AI can be used to identify promising startups and entrepreneurs before they become widely known, giving venture capitalists an early advantage. This can lead to more lucrative investment opportunities.


  • AngelList uses AI to identify promising startups and entrepreneurs before they become widely known.


  • Crunchbase provides a database of startups and entrepreneurs that can be searched using AI to identify promising investment opportunities.


Due diligence and risk assessment: AI algorithms can analyze large amounts of data, including financial statements, market trends, and social media sentiment, to identify potential risks and opportunities in investment prospects. This can help venture capitalists make more informed decisions and avoid costly mistakes.


  • SignalFire uses AI to analyze company data, news articles, and social media to identify potential risks and opportunities for venture capitalists.


  • Databricks provides a cloud-based platform that allows venture capitalists to analyze large amounts of data to identify trends and patterns that may indicate a promising investment opportunity.


Predictive analytics: AI can be used to predict the future performance of companies, helping venture capitalists identify promising investment opportunities early on. This gives them a competitive edge and increases their chances of success.


  • AlphaSense uses natural language processing (NLP) to analyze company filings, news articles, and social media to predict a company’s future performance.


  • CB Insights uses machine learning to identify companies that are likely to be acquired or go public.


Regulatory compliance: AI can help venture capital firms comply with complex regulatory requirements by automating tasks such as KYC/AML checks and reporting. This can save time and money and reduce the risk of fines or penalties.


  • RegTech Solutions uses AI to automate KYC/AML checks and reporting for venture capital firms.


  • ComplyAdvantage provides a cloud-based platform that helps venture capital firms comply with complex regulatory requirements.


Data analytics and visualization: AI can help venture capitalists make better decisions by providing them with insights from large datasets. This can include things like market trends, competitor analysis, and customer behavior.


  • Tableau provides a platform that allows venture capitalists to visualize and analyze large datasets.


  • Qlik provides a platform that allows venture capitalists to create interactive dashboards that can be used to track market trends and competitor analysis.


Fraud detection and prevention: AI can be used to detect fraudulent activities in real time, protecting venture capital firms from financial losses.


  • Recosense provides various AI fraud detection solutions such as the ability to detect fake financial statements.


Portfolio management: AI can help venture capitalists optimize their portfolios by identifying overvalued or undervalued assets and suggesting rebalancing strategies. This can help them maximize returns and minimize risk.


Chatbots and virtual assistants: AI-powered chatbots and virtual assistants can provide 24/7 customer support and answer frequently asked questions, freeing up humans to focus on more strategic tasks.


OK, you get it, AI is everywhere. Above are just a few examples of how AI is being used to improve venture capital, but it’s clear that AI is becoming pervasive in the platforms servicing the Fintech / VC industry. Much of this is from traditional symbolic AI (structured data / rule-based systems / expert systems), but generative AI (unstructured data / text generation / machine learning) is also fast emerging in VC tools.
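
To make the ‘predictive analytics’ category above concrete, here is a minimal, hedged sketch of the kind of model such platforms might run under the hood: a classifier trained on historical deal outcomes that scores a new prospect. The feature names, data, and outcome label are entirely hypothetical; none of the vendors above publish their models.

```python
# Minimal sketch of AI-assisted deal screening: score a new prospect with a
# classifier trained on (hypothetical) historical deal outcomes.
# Feature names and data are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

features = ["arr_growth", "net_retention", "founder_prior_exits", "news_sentiment"]

rng = np.random.default_rng(42)
X_hist = rng.normal(size=(500, len(features)))  # toy data standing in for past diligence metrics
y_hist = (X_hist[:, 0] + 0.5 * X_hist[:, 3] + rng.normal(scale=0.7, size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_hist, y_hist)

new_prospect = np.array([[1.2, 0.4, 1.0, 0.8]])  # one incoming pitch, same feature order
print(f"Modelled probability of a positive outcome: {model.predict_proba(new_prospect)[0, 1]:.2f}")
```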

Insights for Investors / VCs: Expect AI – symbolic and generative – embedded throughout the VC toolchain.

Some Specific Emerging Uses of Generative AI at Raiven Capital


While much of the existing AI applied to fintech and the VC industry is classical symbolic AI, there is of course a great deal of recent interest in generative AI. At Raiven, for example, here are three use cases where we are experimenting with generative AI:


1. Document editing

Using (private / non-saved) generative chat products (ChatGPT, Bard, Edge, etc) to help edit text and documents. This is primarily in two areas:

  • Selective editing of text for consistent smoothing of content tone (not to generate content):


This involves providing the LLM with examples of the desired persona, along with related prompt engineering, and generally integrating multiple passes of review (a minimal prompt sketch follows the list below). 

  • Generation of graphics to illustrate key points.
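
Here is the minimal prompt sketch referred to above. It outlines one way to structure a multi-pass, persona-driven editing loop; `call_llm` is a hypothetical stand-in for whatever private, non-saved chat endpoint a firm has approved, and the persona and pass instructions are illustrative only.

```python
# Hedged sketch of multi-pass tone smoothing with a generative chat model.
# `call_llm` is a hypothetical wrapper; wire it to an approved, non-saved LLM endpoint.

PERSONA = (
    "You are a careful editor for a venture-capital blog. "
    "Rewrite for a confident, plain-spoken tone. "
    "Do not add new facts or claims; only smooth what is already written."
)

PASSES = [
    "Pass 1: fix grammar and obvious typos only.",
    "Pass 2: smooth sentence rhythm and remove jargon.",
    "Pass 3: compare against the original and flag any change in meaning.",
]

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical wrapper around the firm's approved chat-completion endpoint."""
    raise NotImplementedError("Connect this to your approved LLM provider.")

def smooth_tone(draft: str) -> str:
    # Each pass feeds the previous pass's output back in with a narrower instruction.
    text = draft
    for instruction in PASSES:
        text = call_llm(PERSONA, f"{instruction}\n\n---\n{text}")
    return text
```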


2. Doing founder background / market analysis / research 

While key founder background can be found through general internet resources, and especially through many of the tools mentioned above in the VC fintech review, it is also proving useful to apply prompt engineering to produce more focused founder background, market analysis and research. 


3. Coachability scenario development

We are active investors, connecting our founders with a diverse set of relationships and resources, bridging them to markets and providing ‘coaching’ to help their success. As part of our process, we filter for founders’ desire to learn (coachability). So, a key component of our investment strategy emphasizes the following: 

  • Selecting founders for coachability.
  • Providing more intimate, hands-on guidance.
  • Avoiding common ‘anti-pattern’ behaviors that are known failure generators.


We are experimenting with generative AI to quickly create rich scenarios illustrating recurring entrepreneurial patterns, or anti-patterns, to enrich the ‘coaching conversation’. In summary, we believe in high-touch investing, and we believe that AI will improve our ability to deliver and scale that value to our founders.

Insights for Investors / VCs: As with all of 5IR, human + machine offers improved VC operations, using both symbolic and generative AI.

Will AI Kill Venture Capital?


Beyond efficiencies, many are suggesting that AI also has the potential to radically change the overall business of VC. Sam Altman has talked about the end of the ‘venture capital industrial complex’: the idea that major funds have grown too large, dominated by increasing ticket sizes, with too much focus on exits – manifesting a need to return to more innovation. A more extreme suggestion is that AI could replace the human VC function. Chamath Palihapitiya, CEO of Social Capital and the face of the SPAC boom, proposed that there’s a reasonable case that the job of venture capitalist will cease to exist.


The key elements of AI disruption that Palihapitiya and Altman allude to include:

  • overall move to the ‘creator’ economy 
  • based on smaller AI-centric startups (two founders plus some AI tools)
  • funded by DeFi mechanisms. 


Certainly these ‘forces’ exist. But to understand the current state of this evolution, let’s look at the recent impact of AI on basic investment structure. For now, the new AI-controlled investment model seems slow to take off:


The VC market isn’t standing still, but it seems to be evolving to address the challenges rather than disappearing overnight. A near-term AI-enabled shift may land somewhere in between: not the unchallenged domination of the traditional VC behemoths, but not yet a world of DeFi-financed two-person startups. 



Perhaps there is an emerging middle ground: 

  • Lots of agile startups leveraging new tech
  • Germinated in emerging zones of innovation (not limited to Silicon Valley)
  • Smaller creator startups benefiting from more intimate hands-on VC guidance (what we emphasize at Raiven as ‘Bridges Plus Coaching’).




In Summary, there’s more of everything: Disruption, AI Startups, and Evolutionary VC Services.


As suggested in the first section, AI’s effect on business leads to dramatic productivity gains and may move us toward millions of startups made up of teams of one or two. As noted before, pervasive AI tooling could significantly evolve the VC function. But in our opinion, this isn’t the end of the ‘VC industry.’ 


As AI technology continues to develop, we expect to see even more innovative ways in which AI is used to improve venture capital. This will help VCs make more informed decisions, increase their chances of success, and ultimately drive innovation in the fintech industry. Not an end to the VC industry, but a reinvigoration of the startup model. Note: Contrary to their predictions of failure and impending doom, Palihapitiya’s SPACs haven’t replaced IPOs and Altman is still starting new VC funds.


Insights for Investors / VCs:  Look for radical opportunities to improve scale, analytics, and involvement.

IMPACT OF AI GOVERNANCE ON VC


So, at a strategic level, what does this proliferation of AI and AI regulation mean for the business strategy and operations of the VC community? Be vigilant.


AI has become pervasive and ubiquitous. Artificial intelligence is being integrated into nearly every product or service, ranging from life-saving medical devices to self-checkout cash registers. In this landscape, it is crucial for VC investors to discern between valuable opportunities, less promising ones and pure hype.


AI safety has emerged as a critical consideration. Numerous well-established frameworks for AI ethics and principles exist. Companies heavily involved in AI technology should have their own set of AI principles, like industry leaders such as Google, IBM, Microsoft, and BMW.

AI regulation is now a reality and is expected to expand further. From state-level agencies to nationwide or EU-wide regulations, and from agencies overseeing medical devices and consumer safety to labor practices, the regulatory landscape for AI is rapidly evolving.


AI regulation needs to be actively tracked. It is essential for VC firms engaged in significant AI development or exploration to assign someone responsible for closely monitoring these dynamic regulatory developments. Silicon Valley Bank’s failure serves as a cautionary tale. Operating without a Chief Risk Officer during a critical period of transformation in the technology and financial markets was a formula for disaster.



VC-Specific AI Governance


Clearly, as we have discussed in the last two episodes, ‘AI governance’ is moving full-speed ahead with ethical frameworks and legally-enforced regulations on AI business. And it’s clear that AI governance is also affecting the VC business. Alongside the Partnership on AI (focused on the major players in AI), which we discussed in the first episode of this series on ethics (and which has just released its Guidance for Safe Foundation Model Deployment), another industry group focused on ethical AI has started. Responsible Innovation Labs has just launched a Responsible AI Commitments and Protocol, focused on startups and investors. 



Forty-plus VC firms pledged their organizations to make reasonable efforts to:

  1. Encourage portfolio companies to make these voluntary commitments (see the Protocol here).
  2. Consult these voluntary commitments when conducting diligence on potential investments in AI startups.
  3. Foster responsible AI practices among portfolio companies. 


However, the RI Labs Protocol is not without some controversy, somewhat reminiscent of that Pause Giant AI Experiments open letter mentioned in the Ethics episode.


While RI Labs claims its Commitments and Protocol have been endorsed by the Department of Commerce (DOC), the DOC has so far only said, “We’re encouraged to see venture capitalists, startups, and business leaders rallying around this and similar efforts.” And, while RI Labs does count General Catalyst, Bain, Khosla, Warby Parker and other notable VCs amongst its members and signatories, The Information’s Jessica Lessin quotes Andreessen Horowitz GP Martin Casado, who represents the vocal opposition (which also includes Kleiner Perkins partner Bucky Moore and Yann LeCun, Meta Platforms’ AI research chief), as saying “…he could only think of two reasons why a VC would back the guidelines: They’re not a serious technologist or they’re trying to virtue-signal.”


Clearly the AI hype and governance battleground has reached the VC sanctuaries. While this provides plenty of PR fodder, it’s certainly true that VC firms big and small are at the forefront of evaluating AI startups and opportunities, not only for financial risk, but also for ethical and regulatory risk. How should we organize – join PAI or RI Labs, or develop our own guidelines – or all three? More than just guidelines, Andy McAdams of Byte Sized Ethics suggests creating our own AI Risk Scorecard, similar to the various AI indexes developed at Stanford HAI. Andy notes that it may be difficult to find the data to evaluate; the emerging regulations will help, but they aren’t yet fully implemented.

Insights for Investors / VCs: AI Governance: Ethics & regulation will affect almost all founders.

What are we doing at Raiven Capital? AI Proposal Risk-Triage

‘Commitments’ and ‘manifestos’ are great for developers (and founders should have them if they are doing serious AI development), and an AI-heavy VC may want its own ‘principles’, but as a VC firm evaluating proposals, what are the key items to look for when assessing AI governance issues and risk?



AI GOVERNANCE – FOUNDER PITCH REVIEW QUESTIONS

Examples of specific considerations to look for in gating an AI-centric investment:


1 – What is the place of AI in business strategy? 

Is AI a minor product feature, with pass-through liability to an upstream core developer? Is original AI development part of the business? Or is the business itself a core AI product or service? In most of the existing regulatory frameworks this is an important consideration in the appropriate risk assessment.


A. If AI is part of background operations, does it provide basic business efficiency?

  • You’re probably just a user with application risk, but NOT responsible for the product.


B. If AI is a major competitive differentiator, is it core to the business model?

  • There may be unique application product or service liability.
  • Significant competitive shifts in the tech or its regulations may greatly impact business model success, market share or financial returns.


C. Is this an AI business? 

  • There may be unique application product or service liability.
  • You’re a product / service creator. 
  • You likely have primary regulatory responsibility.

CONSIDERATIONS 

  • Obviously, AI product creators (C) have higher regulatory exposure.
  • Competitive differentiators (B) should be concerned about product and data rights. 
  • Use of AI ‘foundation models’ may imply business risk.
  • Private data models have more secure IP and clearer data rights.
  • In all cases there may be ethical issues.
2 – Is this a ‘moonshot’ or ‘deep-tech’? 

Many AI projects are ambitious, groundbreaking ideas or technologies with longer time frames for development and commercialization, and higher business risk. Thus, they may not be appropriate for a shorter-maturity portfolio strategy.

CONSIDERATIONS 

  • Operational AI more readily provides a line of sight to positive cash flow and clear exit strategies. 
  • Longer time frame speculative projects may be appropriate for very large multi-fund investment houses, or stand-alone mega projects, but of course imply greater business model risk.
3 – Is the use of AI legitimate and reasonable, or grossly overhyped?

Avoid investment opportunities that are exaggerating claims of AI in their product (note that this is a specific FTC red flag).

You’ve probably heard of the Turing Test. But have you heard of the ‘Luring Test’? The FTC is all over AI. In addition to its previous regulatory warnings on discrimination and unfairness, the FTC has explicitly warned against the “automation bias” potential of generative AI – ‘luring’ people with advertising disguised as truthful content. It is essential to track this source of regulatory risk.

CONSIDERATIONS

  • While much AI regulation is emerging, Section 5 of the FTC Act prohibits unfair or deceptive practices. 
  • As noted in Episode 2, the FTC has announced it is focusing on preventing false or unsubstantiated claims about AI-powered products, and joined with the Civil Rights Division of the US DOJ, the Consumer Financial Protection Bureau, and the EEOC issuing a joint statement committing to fairness, equality, and justice in emerging automated systems, including those marketed as “artificial intelligence” or “AI.” 
  • There’s always hype in the tech biz, but there is a clear federal mandate to police excessive / unsubstantiated AI claims.
4 – Is this business subject to the ethical considerations of AI?

These considerations (e.g., inherent or potential bias and discrimination issues) should be understood and addressed.

CONSIDERATIONS

  • Discrimination in many areas is strongly regulated without new legislation. 
  • Founders should consider:
  1. adopting one of the industry’s recommended ethical frameworks, 
  2. developing one internally, 
  3. and/or joining the Partnership on AI.
  • Investors may want to create stated ethical principles on AI investment.
  • Using AI to address or prevent discrimination could be a key benefit.
5 – Is there an ‘AI Principles document’?

There are plenty of examples. If AI is a significant part of the product or service development, has a set of governing principles been adopted?

CONSIDERATIONS

If AI is a significant part of the business – as a competitive differentiator or a core AI product – it is essential to have stated principles. They are likely to be required under various regulatory regimes.

6 – What EU AI Act risk category is involved?

Even though it isn’t yet fully implemented, avoid the unacceptable-risk category and approach the high-risk category very cautiously (e.g., some high-risk areas, such as life-critical medical applications, are already subject to clear guidance and regulations in the US and EU, and may be reasonable investments).

CONSIDERATIONS

There is enough preliminary data to know whether a proposed opportunity is likely to be categorized high-risk. Know your category!

7 – Is this business the subject of existing regulations that may be particularized for AI? For example, medical devices / FDA

CONSIDERATIONS

Don’t wait for the AI Act or the Executive Order to be in force. Many applications are already covered by existing regulatory agencies, especially in the US. Know your agency.

8 – Is there an AI compliance person? 

If the use of AI is significant, does the firm have a person specifically identified to track and govern AI (and other) regulatory guidance? This could be a shared role, but it should exist.

CONSIDERATIONS

Whether it’s about data privacy, or AI regulations, or banks in Silicon Valley, you should have an active designated compliance person!

9 – Consider team dynamics and related anti-patterns

At Raiven Capital, we understand that a key element of investment success is to closely vet the founder’s team, not only for core competence and a strong sense of curiosity, but also for team chemistry that includes a capacity for coachability. We specifically analyze for recurring patterns and anti-patterns that help identify investment risk and understand the founders and the opportunity.

CONSIDERATIONS 

  • In IP heavy technology firms, especially including AI, there are a couple of not uncommon anti-patterns to be wary of. Watch out for:
  1. No IP strategy. Not every startup will have the funding to pursue patents early, but if there is significant IP there should be some form of IP strategy. 
  2. IP self-dealing. At the opposite end are founders who, fully aware of the potential value of the IP, attempt to segment and hide it in another corporate entity. A ‘tech’ start-up should own its critical IP.
  • In any case, realize AI is increasingly regulated, and this is a moving target. Stay informed and anticipate evolving regulations.

The purpose of these questions is classic VC deal-gating, plus potential further action or coaching. The following table gives some hypothetical examples. At this point in the regulatory rollout and stage of use, such questions would be premature to use in a scored calculation. Questions are color-coded to indicate levels of potential risk. Almost no question is an absolute deal killer: prohibited EU risk activity such as facial recognition is still acceptable for limited, nonpublic security applications, and other red items, such as not having an assigned compliance person, can be fixed during company buildout. The only real deal killer is overhyped AI – an FTC and SEC red flag. In these examples, company one appears ready to go even though it may be EU high-risk. Companies two and three have correctable items, should they represent otherwise good investments. Some of these items are interdependent; for example, if the company is an ‘AI Business,’ it must have a ‘Principles’ document.


[Table: AI Opportunity Regulatory Risk Triage Example]
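
As an illustration of how such a (deliberately unscored) checklist might be kept alongside deal flow, here is a minimal sketch in Python; the questions paraphrase the pitch-review items above, and the flags mirror the color coding just described. The example company answers are hypothetical.

```python
# Hedged sketch of an AI-governance triage checklist (deliberately unscored).
# "red" flags need fixing or kill the deal; "yellow" flags need follow-up.
# QUESTIONS documents the items; the company answers below are hypothetical.

QUESTIONS = {
    "ai_role": "Is AI background ops (A), a differentiator (B), or the business itself (C)?",
    "overhyped": "Are the AI claims exaggerated or unsubstantiated (FTC/SEC red flag)?",
    "eu_risk_category": "Which EU AI Act risk category is the application likely to fall into?",
    "principles_doc": "Is there an AI principles document (expected if ai_role is B or C)?",
    "compliance_person": "Is someone assigned to track AI regulation?",
}

def triage(answers: dict) -> list[tuple[str, str]]:
    flags = []
    if answers.get("overhyped"):
        flags.append(("overhyped", "red - the only real deal killer"))
    if answers.get("eu_risk_category") == "high":
        flags.append(("eu_risk_category", "yellow - proceed cautiously, track the Act"))
    if answers.get("ai_role") in ("B", "C") and not answers.get("principles_doc"):
        flags.append(("principles_doc", "red - fixable during company buildout"))
    if not answers.get("compliance_person"):
        flags.append(("compliance_person", "red - fixable during company buildout"))
    return flags

# Hypothetical 'company two' style example: otherwise promising, with correctable gaps.
print(triage({"ai_role": "C", "overhyped": False, "eu_risk_category": "high",
              "principles_doc": False, "compliance_person": False}))
```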

Insights for Investors / VCs: Adopt AI governance ‘deal gating’ to understand potential regulatory implications.

A Strategy for ‘Now’: ‘Aligning’ the EU and US risk models


How do you create or invest in a business today, while the implementation of the regulations is still to be determined? Answering the questions above can help, by anticipating which regulatory categories the investment is likely to fall into. 


It’s possible, even necessary, to develop a preliminary strategy especially if thinking globally. The EU AI Act and the NIST AI RMF are the two key sets of guidelines that focus on ethical and responsible development and deployment of artificial intelligence. 


While they share many common goals, there are some key differences between the two frameworks. The practical approach is to develop a strategy that aligns with the risk categories proposed in the EU AI Act and in the NIST AI RMF (the key framework behind the US AI Executive Order), and to anticipate how they will affect a specific start-up company’s use of AI and its likely regulatory compliance. 


An individual founder may be subject to one or both. AI startups that want to operate globally can benefit by aligning their AI development and practices with both frameworks.


Here is a table that illustrates a high-level alignment of the approaches in the EU AI Act and the NIST AI RMF:


[Table: High-level alignment of the EU AI Act and the NIST AI RMF]

For example, the table above illustrates the different approaches:


  • The overall governance model strategy of the EU AI Act is ‘Risk Management’, while the NIST AI RMF frames it more as ‘Governance and Oversight’. 
  • In the EU AI Act, the critical focus in High-risk AI is especially on the input / training data, and similarly NIST emphasizes the Data aspects of ‘Privacy, Ethics and Fairness’. 
  • Finally, the emphasis in Low-risk AI in both models is on basic Transparency, Explainability and Traceability – here it’s assumed the application is ‘OK’, but we want to be able to examine the results if there is some liability question. 


Real applications of the regulatory approaches are much more detailed, and need to wait for their implementation, but the table suggests that there will be reasonable synergy. By understanding how these categories align, AI startups can develop a comprehensive approach to regulatory compliance, aligning principles and practices with the emerging regulatory focus.
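
Since the alignment table itself was published as an image, here is the same high-level mapping restated as a simple structure, taken directly from the bullets above; it is a summary aid only, and the phrasing of each entry is ours, not regulatory text.

```python
# High-level alignment of the EU AI Act and the NIST AI RMF, restated from the bullets above.
# Summary aid only; not a substitute for the regulatory texts.
ALIGNMENT = {
    "overall governance model": {
        "EU AI Act": "Risk Management",
        "NIST AI RMF": "Governance and Oversight",
    },
    "high-risk AI focus": {
        "EU AI Act": "input / training data requirements",
        "NIST AI RMF": "privacy, ethics and fairness of data",
    },
    "low-risk AI focus": {
        "EU AI Act": "transparency, explainability, traceability",
        "NIST AI RMF": "transparency, explainability, traceability",
    },
}
```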

Based on this concept of aligning the regulatory strategies, here are four specific things AI startups can do to align their AI development and practices with both frameworks:


1. Develop a risk management framework

Both the EU AI Act and the NIST AI RMF require AI developers to identify, assess, and manage the risks associated with their AI systems. AI startups should develop a risk management framework that is tailored to their specific AI products and services. This framework should include processes for identifying and assessing risks, as well as for implementing mitigation strategies.
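
As one concrete starting point, here is a minimal, hypothetical risk-register sketch in Python; the fields, categories and entries are illustrative assumptions, not official EU AI Act or NIST AI RMF templates.

```python
# Minimal, hypothetical AI risk-register sketch: identify, assess, and track mitigations.
# Field names, categories and entries are illustrative, not official framework templates.
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str
    category: str          # e.g., "bias", "privacy", "safety", "misuse"
    likelihood: str        # "low" / "medium" / "high"
    impact: str            # "low" / "medium" / "high"
    mitigations: list[str] = field(default_factory=list)
    owner: str = "unassigned"

register = [
    AIRisk("Training data under-represents key user groups", "bias", "medium", "high",
           ["dataset audit before each release", "fairness metrics in CI"], owner="ML lead"),
    AIRisk("Model outputs could leak personal data", "privacy", "low", "high",
           ["PII filtering on inputs and outputs"], owner="compliance"),
]

high_priority = [r for r in register if r.impact == "high"]
print(f"{len(high_priority)} high-impact risks currently tracked")
```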


2. Implement data privacy and security measures

Both the EU AI Act and the NIST AI RMF require AI developers to protect the privacy and security of the data that they use and collect. AI startups should implement data privacy and security measures that are appropriate for the type of data that they use. These measures should include safeguards against unauthorized access, data breaches, and discriminatory use of data.


3. Develop explainable and traceable AI models

Both the EU AI Act and the NIST AI RMF require AI developers to make their AI models explainable and traceable. This means that AI developers should be able to explain how their AI models make decisions and be able to trace the data that was used to develop and train their models.


4. Be transparent about AI development and deployment

Both the EU AI Act and the NIST AI RMF require AI developers to be transparent about their AI development and deployment processes. This means that AI developers should provide clear and accessible information about their AI products and services, including the types of data that they use, the algorithms that they use, and the risks that are associated with their AI products and services.


By aligning their AI development and practices with both the EU AI Act and the NIST AI RMF, AI startups can position themselves for global success while meeting the growing demand for responsible AI development and deployment. Both of these frameworks are under implementation, with many details of regulation emerging over the next year. So the key strategy is to understand how a particular company might be categorized and affected, and to track the emergence of the specific regulations. And investors should look for exactly this strategy.


In summary, the human intuitive component is critical in evaluating startups, managing portfolios and coaching founders. Private capital is inherently nontransparent, and emerging AI is even worse from a technical perspective. At some point, with more regulation and transparency, there will be a larger role for algorithmic evaluation. Right now, the VC ecosystem runs especially on human experts aided by technology scale. VCs have checklists, not algorithms, as the data is too often unknown or fragmented; further, applying AI would be very subjective, as each startup is unique. The coming AI regulations are the first step in providing better context.

Insights for Investors / VCs: We’re in the emerging regulations phase – while details are to be determined. Start building a strategy now.

SECTION 4 – Key Outstanding Issues 


Finally, before we leave you, let’s go back to the beginning. We started this series addressing ‘The Letter’ – “Pause Giant AI Experiments: An Open Letter” – crafted by the Future of Life Institute and signed by many of the AI ‘hero-founders,’ who warn of the existential risk of Artificial Intelligence. Emily Bender cautioned against ‘longtermism,’ which fixates on distant risks while ignoring real current issues; her comments reminded us of a cyclical pattern in technology adoption, where fear and hype (FOMO) are instrumental drivers of decision-making. At the same time, Hinton, Yudkowsky and others suggest we are potentially much closer to AGI and ASI (Artificial General Intelligence / Artificial Super Intelligence) than the previous consensus.


FOOM or FOMO?



Recently, this battle exploded onto our screens again, with a flood of insider leaks as to why the OpenAI board fired CEO Sam Altman, briefly lost him to Microsoft, and then hired him back. And two more ‘letters’: one signed by the vast majority of OpenAI workers threatening to quit, and another rumored to explain the threat of Altman’s course of action. At the root of the existential angst, and the competition between the Big-Tech AI players, is the concept of FOOM. And this acronym has a mysterious double entendre …


–    Fast Onset of Overwhelming Mastery – that once you cross some initial AGI trigger, ASI will come almost instantly.


–    First Out-of-the-Gate Model – that the first to achieve near-AGI will have an unassailable lead. 


Those concerned with the existential risk of AI believe it may happen too fast to control, and those concerned with AI corporate success want desperately to be first. This leads us back to that other acronym mentioned in Episode 1 – FOMO, Fear Of Missing Out, from David Noble’s historical analysis of workplace / factory floor automation. Which one is it? FOOM or FOMO? Essentially both. There are real threats of moving too fast on AI, and there’s certainly a FOMO cloud of super-hype alluded to by Bender, the perceived threat of moving too slow.


Insights for Investors / VCs: Understand the risk, beware of the hype.

Be Careful of the Corporate Structure

To address the existential risk of AI while moving as quickly as possible to commercialize, OpenAI evolved a complicated structure, with a board of directors overseeing a not-for-profit that watches over the corporate enthusiasm, like a built-in red team. 



On their website, they state: “We designed OpenAI’s structure—a partnership between our original nonprofit and a new capped profit arm—as a chassis for OpenAI’s mission: to build artificial general intelligence (AGI) that is safe and benefits all of humanity.”

We won’t try here to analyze the ‘existential threat’ / effective altruism arguments of the ‘Altman was fired, he’s hired away, he’s back’ drama. For one detailed analysis of what went down, from the perspective of trying to protect the world from ‘existential risk’ AI, see Tomas Pueyo’s post on his Uncharted Territories blog, OpenAI and the Biggest Threat in the History of Humanity. But the key issue to point out here is that if, as a VC / investor, someone shows you an org chart like this, take note.

[Diagram: OpenAI’s corporate structure]

If a board of directors controls a not-for-profit (501(c)(3)), which controls a GP LLC, which controls a holding company and a separate capped-profit LLC, then you should really understand the governance risk and be prepared for chaos. The intended goal was to institute ethical non-profit control over an AI startup focused on commercial opportunities and on the verge of existential risk. In the end it created headlines, and may not have ultimately achieved the desired goal.

Insights for Investors / VCs: Beware Russian-Doll corporate structures.

The Who and What Gets Regulated Challenge: GPAI / Foundation Models


Determining who should be regulated is a special challenge. The goal is to strike a balance between regulating AI to ensure safety without stifling innovation. Efforts are being made to allocate regulatory responsibility appropriately within the technology life cycle, limiting burdens on small innovative businesses while assigning primary liability to larger development institutions rather than individual users or smaller customers. Regulation aims to encourage innovation while ensuring safety.


There is a continuing argument by large General Purpose AI (GPAI) developers (e.g., OpenAI, Google, Meta, Microsoft) that they are only building tools, should not be regulated, and should be considered low risk. This contention was a major sticking point in the finalization of the EU AI Act. The resulting compromise is to try to regulate some level of intrinsic application risk at the developer of the GPAI, or what are known as foundation models. A foundation model is a large-scale machine learning model that is pre-trained on an extensive dataset, usually encompassing a wide range of topics and formats. Examples include OpenAI’s GPT and DALL-E, and Google’s BERT (Bidirectional Encoder Representations from Transformers) and T5. 


These models are designed to learn a broad understanding of the world, language, images, or other data types from this large-scale training. An MIT Connection Science working paper suggests that market competition amongst the major foundation models is a major battleground, similar to, but perhaps more important than, the browser, social media and other ‘platform wars’ that preceded them. The Brookings Institution suggests the potential market for foundation models may encompass the entire economy, with high risks of market concentration.

Foundation models are a regulatory challenge given that they are basic technology with many diverse applications. It is typically the applications that carry the regulatory risk, but owing to the scale, complexity, rapid technological advancement, lack of transparency and global reach of these models, regulators have focused on how to assess and control the cumulative risk of their development and management. This is both a scientific and regulatory challenge that will not be fully decided anytime soon. 


The ‘good news’ on the self-regulatory front is that the PAI (Partnership on AI – which includes the majority of foundation model developers, e.g., OpenAI, Google, Microsoft, IBM, Meta…) has just released its ‘Guidance for Safe Foundation Model Deployment. A Framework for Collective Action’. This effort was started in April by PAI’s Safety Critical AI steering committee, made up of experts from the Alan Turing Institute, the American Civil Liberties Union, Anthropic, DeepMind, IBM, Meta, and the Schwartz Reisman Institute for Technology and Society. While there are ‘longtermists’ and ‘effective altruism’ advocates at PAI, PAI is more focused on immediate governance, best practices, and collaboration between different stakeholders in the AI ecosystem. 


HOORAH! Substantive progress is being made on this important topic.


The ‘better news’ is that the group chartered to establish the EU framework for regulating foundation models has just reached a decision. In mid-October 2023, it was announced that EU countries were settling on a tiered approach to regulating foundation models – the bigger developers and projects would get stricter attention. Then, in mid-November, France, Germany, and Italy asked to retract the proposed tiered approach for foundation models, opting for a more voluntary model and causing a deadlock that put the whole legislation at risk if not resolved. Finally, on December 9th the EU Council presidency and the EU Parliament’s negotiators reached a provisional agreement – including specific cases of General Purpose AI (GPAI) systems, and a strict regime for high-impact foundation models. They also mandated an EU AI Office, an AI Board and a stakeholder advisory forum to oversee the most advanced models.


The ‘surprisingly encouraging news’ is that there is uncharacteristic agreement emerging between the US and EU implementations. As noted elsewhere in this series, many of these regulatory points are highly technical – not encouraging when dealing with politically charged issues. Because the US Executive Order uses a jurisdictional entry point of ‘dual use’ (militarily significant) technology, it has emphasized defining this cutoff point. Following the Executive Order’s approach, the EU high-risk foundation model definition will similarly apply to models whose training required 10^25 FLOPs of compute power – the largest LLMs. This level of global regulatory alignment at this stage of technology development is unprecedented and encouraging, and there are many other similar points of technical alignment between the US and the EU. Most big-tech companies will be disappointed at the strong regulation, but reluctantly happy that it is globally consistent. Of course, technologies like synthetic training and quantum computing will shift any computational definition of high risk, but the mechanism is in play to define the currently relevant ‘speed limit’.
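
For a sense of scale, here is a hedged back-of-envelope sketch using the commonly cited approximation that training compute is roughly 6 × parameters × training tokens; the model configurations below are hypothetical illustrations, not figures from any regulator or vendor.

```python
# Back-of-envelope check against a 1e25 FLOP training-compute threshold.
# Assumes the common approximation: training FLOPs ~= 6 * parameters * training tokens.
# The model configurations below are hypothetical illustrations, not real products.

THRESHOLD_FLOPS = 1e25  # EU AI Act trigger for high-impact foundation models discussed above

def training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute in FLOPs."""
    return 6 * parameters * training_tokens

examples = {
    "mid-size model (7B params, 2T tokens)": training_flops(7e9, 2e12),        # ~8.4e22
    "large model (70B params, 2T tokens)": training_flops(70e9, 2e12),          # ~8.4e23
    "frontier-scale run (1T params, 10T tokens)": training_flops(1e12, 10e12),  # ~6e25
}

for name, flops in examples.items():
    flag = "ABOVE" if flops >= THRESHOLD_FLOPS else "below"
    print(f"{name}: {flops:.1e} FLOPs -> {flag} the 1e25 threshold")
```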


The ‘bad news’ – a study performed by the Stanford University Center for Research on Foundation Models (CRFM) assessed the 10 major foundation model providers (and their flagship models) for twelve key EU AI Act requirements, and finds that none of these popular foundation models comply at this time with the preliminary AI Act rules on foundation models.

Insights for Investors / VCs: Closely Track the Regulation of 'Foundation Models.'


  • Understand the regulatory risks of developing ‘foundation models’
  • Understand the licensing risks of using ‘foundation models’
  • Understand the commercial dependencies implicit in ‘foundation models’


Explainable AI – XAI


As we mentioned in Episode 1, the key difference Floridi and Cowls identify between AI and previous technology is “Explicability: Enabling the Other Principles through Intelligibility and Accountability”. This is what’s referred to as XAI – Explainable Artificial Intelligence. XAI refers to the ability of AI systems to provide explanations for their decisions. However, current generative language models like ChatGPT, like most machine learning (ML) systems, often struggle to offer explicit explanations for their responses due to the nature of their training.


“As a language model, I generate responses based on patterns and information learned during the training process, but I do not have the capability to provide explicit explanations for how I arrive at a specific answer. I do not have access to my internal processes or the ability to trace and articulate the specific reasons behind each response.” – ChatGPT. So, in effect these LLMs, like most machine learning AI, fail one of the near-universally agreed-upon principles of AI – accountability / explainability. Research published by the Stanford Center for Research on Foundation Models (CRFM) and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) offers a Foundation Model Transparency Index and concludes that “the status quo is characterized by a widespread lack of transparency across developers”, and that transparency is decreasing.
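
By contrast, here is a minimal sketch of the kind of explanation that is straightforward for classical, feature-based models and much harder for large generative models; the ‘deal screening’ features, data and label are entirely hypothetical.

```python
# Minimal explainability sketch: a feature-based classifier whose decisions can be
# inspected directly, in contrast to opaque large language models.
# The 'deal screening' features and labels here are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["revenue_growth", "burn_multiple", "founder_domain_years", "regulatory_risk_score"]
X = rng.normal(size=(200, len(features)))  # toy data standing in for diligence metrics
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # toy "invest" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global explanation: which inputs drive the model's decisions overall.
for name, importance in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {importance:.2f}")
```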


Insights for Investors / VCs: Explainability is a near-universal AI principle – and one that today’s foundation models largely fail.

Can AI regulate AI?


The concept of using AI to regulate AI is a significant challenge: not relying on guiding principles during development, or on legal arbitration after a failure, but using AI in real time to regulate AI. This concept has been advanced by, among others, Bakul Patel, former head of digital health initiatives at the FDA and current head of digital health regulatory strategy at Google. Patel said recently, “We need to start thinking: How do we use technology to make technology a partner in the regulation?”


Notably Google’s Principles of AI (like many of the core developers) specifically calls out accountability: “Be accountable to people. We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.”


Of course, there are special ethical challenges with this suggestion, but the fundamental issue is that we cannot scale human regulatory oversight using manual approaches alone.

Insights for Investors / VCs: It’s an open question as to whether AI itself might ‘police’ AI.

Overall Conclusion: Stay Vigilant


The governance of artificial intelligence – from the standpoint of technology, organization, ethical frameworks and regulation – is in constant flux. The major lesson is that governance of AI in relation to the venture capital ecosystem is a moving target. 


It’s not possible to just create one framework, follow one set of regulations, or analyze just one opportunity. The nature of this radical entrepreneurial time in our techno-history is a need for constant vigilance by technologists, the regulators, and investors.

What's Next? Quantum AI?


One more thing before we leave you: let’s go back to where this post began, with the impact that generative AI chatbots like ChatGPT will have on the internet search industry, advertising and website placement. 


Clearly, artificial intelligence is at a major pivot point. This inflection point was the cause of a ‘red alert’ at Google, which focused its energy on responding to this competitive threat. There are two additional variables that may radically affect the current AI wars: synthetic training and quantum computing. 


Much of the recent discussion of more rapid AGI/ASI threats centers on the move toward synthetic training of generative AI. Training machine learning models is time-consuming and costly in data rights, as well as in server and energy costs. But what if you don’t need real data? Speeding up and lowering the cost of training these models could dramatically accelerate their development. Microsoft has recently released data on its Orca 2 project, which uses synthetic training. This could reduce the data lead that Google has over Microsoft.


Another major innovative technology that represents a potential existential challenge is quantum computing. Training foundation models of machine learning takes huge amounts of computing power (this is often cited as a regulatory trigger). Here, too, Google is an innovator. 

The Google Sycamore project is a significant practical application of quantum computing technology, reducing the computational resources required for training and parallelizing the workload to address significant problems, challenges, and opportunities in the world today. And IBM has just released its new Quantum System Two.


If you want to look into the future and freak out over the risk and opportunity associated with Skynet, the Terminator, Robocop, etc., then consider the FOOM impact of synthetic training and quantum computing on accelerating artificial superintelligence (ASI). The big-tech players positioned to combine AI with quantum are the same previously discussed internet search / AI competitors – Google and Microsoft, plus IBM and Alibaba.

Insights for Investors / VCs: The VC ecosystem provides the innovation engine for the future.

Businesses that accelerate the combined revolution of new technologies, i.e., artificial intelligence and quantum computing, will provide the impetus for even larger radical changes in the tech business and the economy in general, and these will come from the venture capital landscape. 


While many AI startups, like OpenAI and DeepMind, may end up inside Big Tech, innovation is still the primary domain of startups, venture capital, and the entrepreneurial spirit of the modern fifth industrial economy.




In Summary, 


The VC community needs to stay vigilant and proactive in navigating the complex and evolving terrain of AI. By monitoring AI regulation, embracing safety considerations, and making informed investment decisions, VC investors can position themselves for potential success in this rapidly advancing field.


AI should not just ‘do,’ it must ‘explain.’ Unlike most previous technological advancements like the similar foundational principles of bioethics, artificial intelligence should be required to explain itself and be accountable to users, the public, and regulators. Are These Principles Being Implemented? Yes. Virtually all major companies engaged in artificial intelligence are members of the Partnership on AI and are individually implementing some form of governing principles. The partnership comprises industry members (13), nonprofit organizations (62) and academic institutions (26). It also is international, operating across 17 countries. The community’s shared goal is to collaborate and create solutions that ensure AI advances positive outcomes for people and society. Members include companies such as Amazon, Apple, Google, IBM, Meta, Microsoft, OpenAI, and organizations like the ACM, Wikimedia, the ACLU, and the American Psychological Association. Notably, large global corporations that have implemented such principles are complex global entities. They require parallel implementation by division or geography. For example, AstraZeneca, as a decentralized organization, has set up four enterprise-wide AI governance initiatives, including: overarching guidance documents, a Responsible AI Playbook, an internal Responsible AI Consultancy Service & Resolution Board, and the commissioning of AI audits via independent third parties. AI audits are a key part of any compliance structure, and are recommended in many frameworks. This enterprise model is a sort of ‘principles of AI principles’. AI Ethics: A Form of Governmental Competitive Differentiation In establishing governmental principles, Europe is a trailblazer. In September 2020, the EU completed its EAVA ethical AI framework . The key conclusion: by exploiting a first-mover advantage, a common EU approach to ethical aspects of AI has the potential to generate up to €294.9 billion in additional GDP and 4.6 million additional jobs for the European Union by 2030. Governments can feel FOMO too. The framework emphasizes that existing values, norms, principles and rules are about governing the action of humans and groups of humans as the key source of danger, not designed for algorithms. The EU warned “the technological nature of AI systems, and their upcoming features and applications could seriously affect how governments address four ethical principles: respect for human autonomy, prevention of harm, fairness, explicability.” Literally every government is adopting some form of ethical AI framework. The 2018 German AI strategy contains three commitments: make the country a global leader in AI, protect and defend responsible AI, and integrate AI in society while following ethical, legal, cultural and institutional provisions. Similarly, the 2019 Danish national strategy for artificial intelligence includes six principles for ethical AI: self-determination, dignity, responsibility, explainability, equality and justice, and development. It also provides for the establishment of a national Data Ethics Council. In 2021, the US launched the National Artificial Intelligence Initiative to ensure US leadership in the development and use of trustworthy AI. 
In 2022, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights.This June, the European Parliament passed the European Artificial Intelligence Act , which not only regulates commercial use of AI, but sets principles addressing government use of AI (e.g., limiting national surveillance technology). But What About Military AI? In most dystopian AI fiction, military AI takes over. We’re especially worried about Colossus, Skynet and Ultron, the most evil AI presented in film. In real life, most nations provide for separate governance of AI for defense and security. In 2020, the US Department of Defense, Joint Artificial Intelligence Center, adopted AI Ethical Principles for governance of combat and non-combat AI. The five principles are that AI is responsible, equitable, traceable, reliable and governable.
a person is holding a green leaf in their hand .
20 Sep, 2022
As the United Nations General Assembly (UNGA) debates opens in New York City with the theme of “A Watershed Moment,” the world faces unprecedented and interconnected crises: a tipping point for climate change, the global pandemic, war in Ukraine, and runaway inflation. Despite these seemingly intractable problems, the UN argues that there are transformative solutions to these crises. We wholeheartedly agree. As investors, it is our job – a critical one – to seek innovations that solve real-world problems, especially in terms of environmental, social and governance ( ESG ) goals. Several of our investments actively tackle issues that are high on the agenda of the UN’s Sustainable Development Goals . With this in mind, it is timely that we give a progress report in this mid-investment period: much of it exceeds our expectations, especially within ESG. As a firm, we actively support women . Four out of eight of our venture partners are women, and several of our companies are female-founded and led: Scopio , Whizmo and Vertical Harvest . A few high notes on our investments: Vertical Harvest (Jackson, Wyoming) Vertical Harvest actively works to promote sustainable agriculture that is local, and uses 90 percent less water and land. Consider the negative environmental impact of industrial agriculture and how climate change is impacting agricultural prices , and it is clear why farm efficiency is needed. Vertical Harvest’s urban vertical farms use AI and IoT to improve yields and build smarter cities in locations across the US, starting with its first farm in Jackson, Wyoming and planned expansion across the US. A whopping 100,000 pounds of produce are produced per year from only a tenth-of-an-acre plot of land and its employment scheme provides a model for the industry, as Vertical Harvest’s staff consists of 40% neurodiverse employees. The company was recently profiled on CBS This Morning , and was nominated by Fast Company as one of the Best Workplaces for Innovators . Elevated Signals (Vancouver) Elevated Signals is an AI-driven enterprise software platform that radically streamlines controlled environment agriculture (CEA) operations from seed to sale, increasing the environmental impact of CEA companies. Consider the impact better connectivity has on agriculture: a McKinsey report which chronicles the ways improvements in technology can yield massive agricultural growth: “Artificial intelligence, analytics, connected sensors, and other emerging technologies could further increase yields, improve the efficiency of water and other inputs, and build sustainability.” The report also states that greater connectivity would yield $500 billion in additional value in terms of global gross domestic product by 2030, creating efficiencies that would alleviate pressure on farmers. WayOut (Stockholm, Sweden) Wayout is an IoT enabled water purification technology system that fits within a shipping container, can be rapidly deployed almost anywhere on Earth and provides 3,000 people every day with perfect drinking and cooking water. Wayout leads the way in terms of providing a direct method of disrupting obsolete and aging infrastructure to serve the needs of people in remote areas. With IP complete, deployed systems and orders in place, and partners such as SIEMENS, Alfa Laval and Ericsson, it will meet orders globally. 
Profiled in WIRED magazine, WARP news, Forbes , the company aims to rid the world or water scarcity and stress, a massive problem to tackle, considering 1.1 billion people lack access to clean water, and much of the world still suffers from water-borne illness, such as cholera and typhoid fever. Whizmo (Canada/UAE/Costa Rica) A peer-to-peer mobile money platform that empowers the unbanked to receive, transfer, remit and pay value without having a bank account in emerging economies. Whizmo is like m-pesa for emerging economies such as Dubai and Costa Rica. Whizmo saves a great deal of time and money for people who otherwise wait in long lines to receive money, make remittances, or have to pay high fees. Consider the opportunities of financial inclusion , and its ability to lift billions out of poverty – especially women – as 1.4 billion adults have no access to banking according to the World Bank. Scopio (Los Angeles and New York) Female founders Christina Hawatmeh and Nour Chamoun are making waves in the creator economy, with their company Scopio, aka Scope it out. Allowing photographers from anywhere to sell their photos everywhere, it makes images – NFT, photos, and art – more accessible and diverse. The company’s platform empowers artists and creators in far-flung corners globally and Scopio recently published a book featured in Entrepreneur: The Year Time Stopped with HarperCollins. CEO Christina Hawatmeh was listed in the top 15 Entrepreneurs to follow in 2021 by New York Finance and co-founder Nour Chamoun was featured in the Forbes 30 Under 30.
a black and white photo of a bridge with a cloudy sky in the background .
20 Sep, 2022
Preamble Raiven Capital is a global early-stage technology venture capital fund that believes in the power of innovation. The fund seeks to strengthen the ecosystems that it invests in, building bridges and contributing to thought leadership in venture capital. The goal is to foster innovation and provide insight to founders across the world, contributing to their operational playbooks. This whitepaper summarizes findings from Raiven’s inaugural research project. In a world where the future of work is already here, we wanted to provide deeper insights into the changing work world. In addition, how do we understand and operationalize insights from across the literature? Raiven’s larger goal is to take a deeper look at how to support tech companies and navigate the landscape of remote work in a more nuanced way. Abstract The current state of business and organizational literature looks at the future of work and addresses best practices in hybrid and remote work, the changing workplace culture and the leadership required to lead in this complexity. Five startups participated in a two-hour scenario and transcripts of the sessions were analyzed according to established qualitative methodologies. The timed scenario involved a challenging situation a tech company would normally face. Partway through the scenario a change that increased the stress/urgency of the situation was introduced. Once the scenario was completed, the teams were each invited to debrief. A new theme that emerged from the research: an organization that creates a culture map for the whole team that lives within it and is based on a high emotional intelligence, is able to navigate the stressors of a remote work environment with more clarity, innovation and ease. In addition, the clearer the map for external relationships, the clearer the strategy is to address challenges that arise due to client needs. Companies that had a clear internal and external relationship map had better success. The tools bridged the gap of remote work to create better cohesion and symmetry within teams. Culture maps (internal) and relationship maps (external) became the “glue” that helped remote teams generate trust, communicate effectively, and work more efficiently and innovatively within teams. Background The future of work is a topic often addressed in business and tech literature. Discussion includes many trends and “how to” guides. Much of it focuses on a post-pandemic phenomenon: “The Great Resignation” which includes the 40% of tech workers that have left or planned to quit (Deczynski, 2021). The pandemic reinforced individuals’ beliefs – that their life had value and mattered. Work was not just a place to be. Now, employees want money, benefits, flexibility not just in days, but in how they spend their hours. They want a workplace that fosters creativity and collaboration, and offers a healthy workplace culture (Deczynski, 2021, Downs, 2021). People quit six-figure jobs to prioritize mental health and travel. They also began searching for fulfilling, flexible careers. The cult of being busy is no longer acceptable. People want life-work integration (Groff, 2021, Fox, 2021). Even the four-day work week is passé. The broader question: What does the organization need to actually “work?” How does it collaborate, meet individual differences and needs, foster productivity and happiness (Collin, 2021, Weikle, 2021)? How do leaders and organizations respond to navigate the complexity? 
The current literature falls into three main categories: hybridized or remote work, workplace culture, and leadership. Hybridized or Remote Work The literature reinforces that remote, or asynchronous environments can work, but what is critical is creating a culture that values it. Remote work will remain and account for 48% of the workforce by 2030 (Bleimschien, 2021). Communication is key. Team members must learn to navigate and communicate effectively across time zones and cultures. Collaboration styles may differ. Private communication is not helpful most of the time.Transparent work where others can see one’s work and pick up where others left off make most meetings unnecessary. If there are meetings, who attends? Are there minutes for review? Agreed upon deliverables provide measurable results (Tucker, 2021). Meetings should be in response to specific milestones; this fosters productivity (Carr, 2021). Each participant should have clear engagement and understand how to engage in advance of the meeting so that meetings are productive, and have full participation and buy-in (Steen, 2021). Note: a seamless flow of information between teams and immediate access to information and use of all technology is required for full participation (Bleimschein, 2021). Does remote work “work”? According to the research, it is built for cooperation not collaboration (Ramesh, 2021). Others argue that short-term productivity goes up, but long-term creativity goes down because there is a damper on collaboration and innovation. Real-time conversations and corridor chat are not often replicated in the virtual world (Stillman, 2021). Mentorship and developing new friendships become more difficult (Glasser, Cutler, 2021). Unplanned “collision conversations” are needed. These types of conversations are exciting, full of brainstorming and innovations (Glasser, Cutler, 2021). The literature states: “Workers want to work with people they like and in systems that engage them for the money they think they’re worth. A virtual experience that fails to deliver professional fellowship and intriguing challenge, will cause a worker to feel unconnected from their work.” Intentional collaboration is key (Carr, 2021). People need to make sense of things with one another. Humanizing experiences engender belonging and trust (Hobson,2021). Employee experience is just as important and can include virtual water cooler and townhall discussions, as well as socializing instead of agenda-based meetings (Hobson, 2021). Building a hybridized and remote work environment is not a one-size-fits-all. It is important to understand what models enable the best workflows, despite the history of the organization. Workplace Culture Workplace culture consists of tacit agreements about values, ethics and operations that shape the attitudes and behaviors within an organization. They define what is encouraged, discouraged, accepted nor rejected within a group (Groysberg, Lee, Price & Cheng, 2018). In post-pandemic digital workplaces, the culture is being redefined. Some workplaces always strived for ethics, integrity, wellness, creativity, diversity & inclusion. These values are now in high demand from employees and have become a factor in company resignations. Employees and investors value integrity. Honest brands are valued (Schwates, 2021). Employees want to belong and be recognized for accomplishments, while being able to accommodate family obligations (Ramesh, 2021). 
Mike Prokopeak, editor in chief of Reworked, noted that in the digital workplace, the human element of work is the most important. “The team we build, the people we develop and support, and the mission we choose to pursue together…It’s the values we share, the dreams we pursue together, and the quality of our relationships that will define whether or not we succeed.” (Rodgers & Nicastro, 2021, Williams, 2021). In other words, technology will only get us so far. The capacity to collaborate is the biggest predictor of a remote team’s success. Emotional intelligence is needed: who is best at what? What else is important: clearly defined leadership, communication, coordination, transparency, time management and responsiveness. Emotional intelligence is needed for effective meetings (Moore, 2021) and to navigate what is needed for remote work. Social skill and social perceptiveness are relevant too (Riedl, Malone & Woolley 2021, Moore, 2021). Leadership In the future of work literature, leadership needs to understand that remote and hybridized work is here to stay and requires a reframing: work is not just hours given. An employee’s engagement needs to be meaningful, not rote. It is important to shift to valuing productivity over longer days (Ramesh, 2021). Leaders need to challenge the assumptions that underlie facetime (Stockpole, 2021), and they need to respect time outside work. There must be mutual trust that work will get done, especially with flexible hours. An idea: evaluate work based on productivity, not time (Stewart, 2021, Rodgers & Nicastro, 2021). Leaders need to understand that creativity comes from incubating and shifting focus, so downtime is equally important to increasing productivity (Nova, 2021). This shift in thinking allows for upward mobility despite remote locations. Leaders need to think human centric and must have the skills to manage human differences (Carr, 2021). Creating a healthy and productive workplace culture is related to emotional intelligence. Creating a team of lifelong learners who possess self-leadership and interpersonal engagement is also key (Rodgers & Nicastro, 2021). Further, hybrid or remote work must be accessible to all, whether it be equipment for a home office, training on technology, or accessible child care, especially for women. Women end up being responsible for the majority of unpaid work for child care, home care or elder care when working from home (Youn, 2021, Hobson, 2021, Rooney, 2021). The Gap and the Research As was noted before, the literature states that the future of work is here, stressing that key high-level assumptions are shifting. Actions and attitudes need to follow in response. Technology is already at the cutting-edge of many of these trends. Qualitative analysis allows for a small sample size to show repeated themes. These reveal new insight that can only be found in listening to experience and deeper verbal content versus a question and answer-type analysis. The simulation began with a situation each team would address in their day-to-day operations, somewhat stressful, with some time pressure. Then, at the 30 minute mark, they were thrown a curveball. The new information in the scenario significantly increased the pressure. Each team had 1.5 hours to approach and resolve the situation together. Then, each team had the opportunity to debrief and reflect on their experience and their individual performance. 
Each section (the initial scenario, the curveball, and the debrief) revealed aspects about the team, their communication, their collaboration, and their ability to navigate remote work. What became clear: the less noise and more creativity possible through a seamless ability to navigate remote work, the more capacity the organization has to pursue its goals. The Results The results reinforced, but also went beyond the literature. The companies that solved the scenario well – understanding instructions, collectively and efficiently coming up with an approach and response to the initial scenario, approaching the crisis in a clear, calm way with an integrated solution – reinforced the current literature about remote work. The others failed to come to a solution. However, two crucial themes also emerged. The companies that were able to solve the scenario easily and successfully had a very clear internal culture map , which is the foundation for a very clear external relationship map . “Culture is the tacit social order of an organization: It shapes attitudes and behaviors in wide-ranging and durable ways. Cultural norms define what is encouraged, discouraged, accepted, or rejected within agroup. When properly aligned with personal values, drives, and needs, culture can unleash tremendous amounts of energy toward a shared purpose and foster an organization’s capacity to thrive” (Groysberg, Lee, Price & Cheng, 2018). When culture is embedded throughout the layers of the organization with behaviors, values, processes and operational actions and strategy, a “map” is created for the organization. It is a silent language that guides the journey, leading to ways that stakeholders, customers and clients relate, helping define the experience from beginning to end. The responses to company scenarios reflect that the culture map (inward facing) and the relationship map(external facing) become a glue that helps mitigate and manage conflict in remote work, communication and relationships. Specifically, as reflected in the literature, those companies that expressed emotional intelligence (self-perception, interpersonal skills, problem-solving and stress management skills) allowed a foundation for managing day-to-day relationships and approached crisis in a consistent and more efficient way that helped the teams feel connected and productive while giving a general sense of well-being. Beyond the mission and vision of a company, companies that handled the scenario most successfully had integrated and owned their vision. They built the culture map by sharing the vision collectively and individually. It did not exist just with a single or few people. There was a common sense of shared values that existed from a culture that engaged its people in an ongoing co-creation. While the vision may have stemmed from one person’s idea, the culture was now jointly held, at least to the degree of each person’s role. When the role of each team member was very clear. Each person knew each other’s strengths and weaknesses and how they fit together. There was clear camaraderie that allowed them to understand each other in nuanced ways. There was a shared understanding of differences in styles, culture, and personalities and they were able to manage time zones and were conversant with the shared technology. 
The companies that had cultures that reflected emotional intelligence (self-perception, interpersonal skills, problem solving and stress management skills) also were able to discuss vulnerability (within themselves and within their organization), gaps, ethics, relationships, and manage stress in a way that allowed these perceived challenges to become clear opportunities to address the gaps and problem solve to create better solutions or a better process. Culture maps defined roles and processes. These were often understood processes for crisis in communication, and clear channels, including platforms of technology, for communication. Rather than seeing them as negative, stressful situations were seen by these companies as another avenue to implement a strategic roadmap underpinned by an embedded culture. Creativity, communication and problem solving felt “natural”. The culture created bonding, a sense of shared values and trust, which was crucial, especially in a remote work environment. Leadership and engagement existed at all levels. The CEO took a secondary seat, inviting contribution and collaboration, waiting for synergy to create itself. Each member was equipped with the knowledge of all remote platforms/interactive technology used or needed. Each participant had a sense of logistics and priorities that the team needed to address the scenario. The culture and values being held by each team member allowed all voices to be heard or to speak up, all voices to challenge and question and all voices and contributions valued. Each person’s personality, area of expertise, and how their part fit into the whole was acknowledged. This clarity of what the cultural identity of the company is and is not, allowed most successful companies a greater sense of confidence. Teams took time to understand the scenario, grasp its implications for the team rather than plunging forward. They were able to assess the challenge in the context of who they were and apply it to their vision and culture to ensure that their approach matched who they were. This internal culture map led to a clear external relationship map. The companies that had the most success with the scenario were able to look at the customer/client and see who they were and create a clearer bridge to addressing the client’s needs and expectations in a way that was consistent with their culture. They were also able to assess if the client/customer was a fit and were ok with losing a client if it meant it was not a fit for the culture. They had a clear understanding of the experience they wanted their customer to have at each step with them and wanted to understand the impact of their approach on that experience. That external relationship map made it easier and more efficient to find solutions. In fact, it provided a foundation that allowed for more creativity in solving the crisis. At each choice point, these two maps: the “internal culture map” and the “external relationship map” became the foundation for decisions that allowed flow, ease and confidence. It created space for creativity,communication and a deeper capacity to analyze and solve the situation at hand. The fuzzier the internal culture map the fuzzier tackling the rest of the scenario became. What that means is that a number of organizational behaviors were affected on a varying basis. 
These included: an understanding of instructions, the roles individuals played, how they tackled the problems, how the team approached solutions, how the team discussed values, ethics, and managed themselves and each other. In companies where the leader still primarily held the culture, the conversation revolved around pleasing or supporting the leader’s vision. Dissent, challenge, looking at gaps or co-creation were less visible. Valuing and group dynamics were more unidirectional rather than collaborative and were not directed equally as members of the team. Communication didn’t have a flow. It was more haphazard. It took longer. Ironically, it required the leader to take more time and space to have to manage rather than having it held with the group. Gaps were often overlooked or there was inclination toward false bravado, relying on what worked, rather than a serious look at what wasn’t working. Without a clear culture map, it was also more difficult to test the gaps against where the company’s stated vision and culture and values indicated they wanted to be. There wasn’t as much confidence that allowed each person to contribute. They weren’t as able to see the client/customer needs and bridge to them. The less emotional intelligence and maturity within the team, the less insight they had about the process itself and its impact on creating solutions. The scenario highlighted that the clearer a company’s internal culture map and the clearer their external relationship map, the clearer the roles of each member were. Synergy existed and leadership was held within the whole team. Processes existed and unfolded more efficiently. The clearer the external relationship map, the better the understanding of the customer/client and the more they were able to execute their strategy to meet the experience they wanted the customer/client to have. Companies that valued emotional intelligence expressed more confidence and better communication. There was a sense of belonging, valuing and trust within the team. The internal culture map and external relationship map bridged the gap of remote work to create better cohesion and symmetry within teams. These initial qualitative findings indicate the relationship between clear culture maps and internal and external relationships. Maps impact the ability to deliver strategy. They also create ease in remote work environments, especially in the tech industry. Further research would be required both qualitatively and quantitatively to further validate these findings. In conclusion, the implication for venture capital funds’ portfolio companies: startups that have clear internal and external culture maps embedded by top leadership can handle stress, crisis and the uncertainty. They are more efficient, productive, innovative and creative. References Bleimschein, Benedict Inc. July 27, 2021. “Use This Pyramid Framework to Effectively Manage HybridTeams.” Carr, David F. Venture Beat. October 19, 2021 “Gartner Prescribes a Human-centric, Hybrid-Focus for the Future of Work.” Collin, Mathilde. November 9, 2021. POVL . “The 4-Day Workweek is Not the Future of Work. The Future is Flexibility.” Deczynski, Rebecca. September 27,2021. MSN. “The Great Resignation is Going to Be a Shock –Hitting Some Industries Harder Than Others” Downs, Sophie. Inc. “PwC Survey: Employers Struggling to Keep Up With Changing Employee Expectations.” Fox, Erica Ariel. Forbes. October 18, 2021. “Work-Life Balance is Over – The Life-Work Revolution is Here” Glasser, Edward and Cutler, David. 
The Economist. September 24, 2021. “You May Get More Work Done at Home. But You’d Better Have Ideas at The Office.” Groff, Bree. Fast Company. October 3, 2021. “Leaders Are Thinking About Hybrid Work in a One Dimensional Way. There is a Better Approach. Groysberg, Boris, Lee, Jeremiah, Price, Jesse, and Cheng, J.yo-Jud. Harvard Business Review, January, 2018 “Corporate Culture.” Hobson, Nick. Inc. November 5, 2021. “The Obvious Psychological Truth Left out of Most Future of Work Conversations.” Nick Hobson. Inc. November 17, 2021. “Microsoft Research Reveals the Biggest Downside to Remote Work and Here’s How to Address it.” Moore, Dene. Globe and Mail. December 28, 2021. “Emotional Intelligence Trumps IQ in the Workplace and Women Have More of It.” Nova, Annie. CNBC. December 26, 2021. “How We Work From Home Needs to Change in the New Year.” Ramesh, A.R. Entrepreneur Magazine. July 6, 2021 “Reimagining the New Mandate For the Future Workforce” Riedl, Christoph, Malone, Thomas W. and Woolley, Anita W. Oct 21, 2021. MIT Sloan. “The Collective Intelligence of Remote Teams.” Rodgers, Gabrielle and Nicastro, Dom. Oct 20 2021. CMSWire. “5 Takeaways from the Fall 2021 Digital Workplace Experience Conference.” Rooney, Katharine. The Future. October 13, 2021 “These 5 Themes are Shaping the Future of Work.” Schwantes, Marcel. Inc. June 29, 2021 “Warren Buffet Thinks You Should Hire For Integrity First.” Stewart, Ben. Fast Company, July 29, 2021. “This Should be the Remote Workers’ “Bill of Rights.” Steen, Jeff. Inc. February 16, 2022. “High Profile Study Reveals Why Most Meetings are Ineffective. It Only TakesOne Simple Step to Fix It.” Stillman, Jessica. Inc. September 20, 2021. “New Microsoft Study of 60,000 Employees: Remote Work Threatens Long-Term Innovation” Stockpole, Beth. MIT Management. July 27, 2021 “Digital Transformation After the Pandemic.” Tucker, Matt. Entrepreneur. July 24, 2021. “How to Create an Asynchronous Work Culture.”  Weikle, Brandie. CBC Radio. Dec 20, 2021. “Forget 9-5. These Experts Say the Time Has Come for the Results-Only Work Environment.” Williams, Shannon. Dec 27, 2021. IT Brief New Zealand. “The Future of Work is About People, Not Tech.”
three men are posing for a picture in front of a toronto sign .
08 Sep, 2022
At Raiven, we are happy to share that we had a great summer. It is also time to roll up our sleeves and get back to work. We’ve been on the move: traveling, meeting founders, investors, and seeking the latest technologies. This post is a recap – of our Nordic trip, where we learned about the latest in food and AgTech, and also our participation in the Collision Conference in Toronto. We also have some new investments that we are excited to share with you. NORDIC STOP: PASSION FOR FOODTECH We were delighted to spend time with Raiven LP and Advisory Group member Björn Öste in Stockholm and Lausanne. It goes without saying that he is a visionary foodtech entrepreneur and co-founder of Oatly, which went public last year. We also participated in two major conferences: The Future of Food Summit and Stockholm TECH Live 2022 . Björn gave the keynote presentation at FoodHack in Lausanne. His insights were inspiring: He spoke of the journey of Oatly and gave the crowd tips on navigating the food sector, strengthening the resolve of many founders on their entrepreneurial journey. Founders remarked that Björn’s talk gave muchto think about. He offered ideas for creating new products even when no market exists, noting that an IPO or acquisition may be tough to imagine for founders in the food space.
a woman wearing a mask is sitting in front of a laptop computer .
16 Aug, 2022
RAIVEN LEADS RESEARCH ON THE FUTURE OF WORK
Portrait of Björn Öste. Photo credit: Kathryn Costello
04 Jun, 2022
OATLY CO-FOUNDER INVESTS IN RAIVEN CAPITAL
a black and white photo of a bridge over a body of water
03 Jun, 2022
Technology innovation knows no bounds. Neither do we.
Share by: