ARTIFICIAL INTELLIGENCE, PROMISE OR PERIL: PART 2 – REGULATING AI

Dr. James Baty • Oct 05, 2023

by Dr. James Baty PhD, Operating Partner & EIR and Tarek El-Sawy PhD MD, Venture Partner


RAIVEN CAPITAL


This is our second release in our series on governing Artificial Intelligence – AI, Promise or Peril


This release consists of two sections:


AI Regulatory Challenges & Models

AI Regulation Examples & Issues


The third release will cover AI from a VC perspective. The first release on AI Ethics is available here.


In the inaugural episode of our ‘AI, Promise or Peril’ series, we delved into the clamor surrounding Artificial Intelligence (AI) ethics—a field as polarizing as it is fascinating. Remember the Future of Life Institute’s six-month moratorium plea, backed by AI luminaries? Opinions ranged from apocalyptic warnings to messianic proclamations to cries of sheer hype.


We observed that the answer to AI governance isn’t a one-size-fits-all solution; rather, it’s a cocktail of corporate self-governance, industry standards, market forces, and international legislation—sometimes with AI policing itself. Our journey began with the first episode dissecting the growing landscape of AI ethics frameworks, concluding that the quest for the perfect blend of public, private, and governmental guidelines is only just beginning.


In today’s installment, let’s attempt an overview of formal AI regulation: the challenges in regulating AI, the primary regulatory models, key implementations to date, and outstanding issues.


No single guide to AI governance can cover everything. Our goal is a meta-guide that highlights the key issues, actors, and approaches, so readers are informed enough to evaluate how AI impacts the VC ecosystem.


AI REGULATORY CHALLENGES AND MODELS


Why AI Regulation is a Must


Regulating tech isn’t new; we’ve done it for radio, TV, cars, planes, medical devices, and the Internet. These regulations address safety, fairness, and privacy, often leading to new laws or even regulatory bodies.


But AI is a different beast. It not only introduces fresh risks, but also amplifies existing ones. Consider self-driving cars: should they not be regulated for safety like traditional cars? What if they’re also collecting and selling your travel data, leaking your financial information, or denying you access based on biased facial recognition errors? Who’s responsible if your autonomous vehicle kills someone? Which ‘agent’ does the regulation hold liable: the manufacturer, the AI navigation software developer, the GPS service, the passenger/owner, or everyone?


Notably, AI isn’t always easily scrutinized; it is embedded in code rather than in distinct physical objects. This makes oversight more challenging, yet urgently needed. Case in point: research by Joy Buolamwini and Timnit Gebru showed commercial gender classification systems had error rates of up to 34.7% when identifying darker-skinned women. The frightening outcome: such errors have already led to wrongful arrests. And, oh yeah, Amazon’s facial recognition tech falsely matched 28 members of the US Congress, disproportionately people of color, against criminal mugshots. No surprise that the EU AI Act now classifies real-time public facial recognition as ‘unacceptable risk’ – prohibited.


Regulating AI is complex and has wide-ranging implications, from policing, to medicine, to military applications. Two points stand out: AI brings unprecedented challenges and risks, and, as with social media, regulation is already behind the curve.


While one article can’t cover all the intricacies of AI regulation, understanding the big picture is crucial for business and investment decision-making.



Challenges in Regulating AI


“It takes all the running you can do, to keep in the same place.” — The Red Queen to Alice


AI isn’t just another tech innovation; it’s a whirlwind of unique challenges. Before diving into AI regulation models, let’s highlight the issues making AI a tough nut to crack.


While tech’s pace has always been brisk, AI is exploding. AI has underpinned tools like Google Search for ages. Yet, within six months of ChatGPT’s release in late 2022, OpenAI had spurred an all-out arms race: Microsoft threw $13 billion into OpenAI and integrated ChatGPT into Bing and Edge, Google unveiled Bard built on its LaMDA model, and Meta shifted gears from the metaverse to launch open-source LLM tools. Amidst this, Marc Andreessen laments this may be the end of traditional ‘Internet Search’.


The well-known ’Red Queen Problem’ – needing to run constantly just to stay in place – makes AI regulation especially challenging, due to several factors:


  1. Rapid Changes: AI evolves so fast that regulations risk becoming obsolete as soon as they’re enacted.
  2. Data Drift: Fluctuating data patterns can unpredictably alter AI systems, muddying fairness or accuracy enforcement (see the sketch after this list).
  3. Market Dynamics: The race for AI dominance could make companies overlook regulatory speed limits, risking non-compliance or loss of edge.
  4. Security Mutation: Evolving cyber threats necessitate continually updated security standards.
  5. Resource Utilization: Rapid advancements could make previous resource guidelines irrelevant.
  6. Ethical Fluidity: Social norms shift, meaning regulations on AI ethics, like bias and discrimination, need constant recalibration.
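To make the ‘data drift’ item concrete, here is a minimal sketch (in Python) of how a compliance team might check whether a feature’s production inputs have drifted away from the data a model was trained on, using a two-sample Kolmogorov–Smirnov test. The feature, the simulated numbers, and the 0.05 threshold are illustrative assumptions, not a prescribed regulatory method.

```python
# Minimal, hypothetical drift check: compare a feature's training distribution
# with recent production inputs using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_income = rng.normal(loc=55_000, scale=12_000, size=10_000)   # data seen at training time
production_income = rng.normal(loc=61_000, scale=15_000, size=2_000)  # recent live traffic

stat, p_value = ks_2samp(training_income, production_income)
if p_value < 0.05:  # illustrative threshold
    print(f"Drift suspected (KS statistic = {stat:.3f}); review or retraining may be warranted.")
else:
    print("No significant drift detected for this feature.")
```

In practice, a process- or outcomes-oriented regulator might expect such checks on a defined schedule, with documented thresholds and escalation paths.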



Though many tech innovations face such hurdles, AI’s dynamic nature is extreme. But remember, this speed of change is only challenge number one in the long list of AI regulatory puzzles. Consider these other significant issues that complicate regulation:

Disruptive Business Models: Emerging technologies like AI blur traditional business and regulatory boundaries, potentially rendering existing frameworks outdated or irrelevant.


Complexity and Interdisciplinary Nature: Navigating the labyrinthine intricacies of AI demands not only specialized technical know-how but also a multifaceted understanding of ethical, social, and economic ramifications.

Jurisdiction and Governance: The borderless nature of tech giants collides with fragmented oversight, complicating governance across sectors like healthcare, finance, and transportation.


Ethical and Social Challenges: Balancing unbiased AI, privacy preservation, and safety protocols in burgeoning technologies is akin to walking a high wire.


Economic Impacts: The tightrope act between regulation and innovation introduces risks—from stifling global competitiveness to perpetuating access inequities in marginalized communities.


Unintended Consequences: The specter of regulatory capture and the paradox of too much or too little oversight pose dual threats to innovation and ethical conduct.


This is not an exhaustive list, but it is clear that AI poses not just technical risks but also unique regulatory challenges. These challenges demand a nimble, multi-disciplinary approach that can adapt as technology evolves. Regulatory sandboxes, public-private partnerships, and multi-stakeholder governance models are among the strategies being explored to regulate AI more effectively. Let’s look at the high-level regulatory models available for this daunting task.



Different Models of AI Regulation


Artificial Intelligence regulation is a subject of increasing interest and urgency as AI technologies become more pervasive and impactful. Several regulatory models have emerged that aim to ensure the safe and ethical deployment of AI. Here’s a look at the applicability of three commonly discussed regulatory models: the Risk Model, the Process Model, and the Outcomes Model. Each may be more appropriate for different types of AI applications, market settings, or regulatory goals, and they are often combined.


RISK MODEL

The Risk Model of AI regulation focuses on categorizing and assessing the potential risks and dangers associated with specific types of AI technologies or applications. Regulation is tailored according to the level of risk, with higher-risk technologies receiving more stringent oversight.


  • Example Application – Autonomous Vehicles: Regulators could classify self-driving cars as high-risk due to potential accidents or injuries. Before reaching public roads, these vehicles might need to pass diverse safety tests under various driving conditions. Additionally, manufacturers could be mandated to maintain significant insurance policies for potential liabilities.


  • Key Features:
  • Risk Assessment: Prioritize AI technologies based on potential harm to individuals and society.
  • Tailored Regulation: Apply differing levels of scrutiny, approval, and monitoring based on risk assessment.
  • Compliance Checks: Regular audits or evaluations to ensure risk mitigation.




PROCESS MODEL

The Process Model focuses on the development, deployment, and operational stages of AI systems. Instead of primarily targeting the technology itself, this model aims to regulate the methods and processes by which AI is created and used.


  • Example Application – Facial Recognition: If a company intends to deploy facial recognition in airports, it would need to align with specific regulatory standards. This would involve using unbiased training data, ensuring algorithm transparency, and instituting human oversight mechanisms. Regular compliance audits might be necessary to verify adherence.


  • Key Features:
  • Development Guidelines: Setting standards for data collection, training, and algorithm design.
  • Transparency: Requirements for disclosing algorithms, data sources, and decision-making processes.
  • Operational Protocols: Rules for how AI systems are deployed, monitored, and maintained.
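As an illustration of the Process Model’s documentation and transparency requirements, here is a minimal, hypothetical sketch of a structured disclosure record (akin to a ‘model card’) that a deployer might be required to maintain and keep current. The system name, field names, and values are all invented for illustration, not drawn from any actual regulation.

```python
# Hypothetical process-model compliance artifact: a structured disclosure record
# covering data provenance, transparency, and operational oversight.
import json
from datetime import date

disclosure_record = {
    "system_name": "AirportFaceMatch (hypothetical)",
    "intended_use": "identity verification at boarding gates",
    "training_data": {
        "sources": ["vendor dataset A", "airport enrollment photos"],
        "bias_review_completed": True,
        "review_date": str(date(2023, 9, 1)),
    },
    "transparency": {
        "algorithm_family": "face-embedding model",
        "decision_process_documented": True,
    },
    "operations": {
        "human_oversight": "an agent confirms every non-match before denial of boarding",
        "monitoring_interval_days": 30,
        "last_compliance_audit": str(date(2023, 10, 1)),
    },
}

# An auditor or regulator could require this record in a standard, machine-readable format.
print(json.dumps(disclosure_record, indent=2))
```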




OUTCOMES MODEL

The Outcomes Model focuses on the societal and individual impacts of AI, rather than on the technology or the process by which it was developed. This model aims to enforce accountability based on the actual outcomes or consequences of using AI.


  • Example Application – AI in Healthcare Diagnostics: For AI systems diagnosing diseases via medical imaging, the emphasis might be on post-deployment outcomes. Regulators would scrutinize the diagnostic accuracy, false positive/negative rates, and any detectable biases. Inconsistencies or significant errors could lead to regulatory penalties or restrictions.


  • Key Features:
  • Impact Assessment: Evaluation of AI’s societal and ethical implications, post-deployment.
  • Accountability: Assigning responsibility for negative outcomes and enforcing penalties.
  • Adaptive Regulation: Regulation evolves based on observed impacts, with feedback loops for continuous improvement.
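To illustrate the Outcomes Model’s post-deployment emphasis, here is a minimal sketch of the kind of monitoring a regulator might expect: computing false-positive and false-negative rates per demographic subgroup for a deployed diagnostic classifier. The subgroups, error rates, and data are simulated purely for illustration.

```python
# Hypothetical outcomes monitoring: subgroup false-positive / false-negative rates
# for a deployed classifier, using simulated predictions and ground truth.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
group = rng.choice(["A", "B"], size=n)            # two illustrative subgroups
truth = rng.integers(0, 2, size=n)                # 1 = condition present
error_rate = np.where(group == "B", 0.15, 0.08)   # simulate worse performance on group B
pred = np.where(rng.random(n) < error_rate, 1 - truth, truth)

for g in ("A", "B"):
    mask = group == g
    fpr = np.mean(pred[mask & (truth == 0)] == 1)  # false-positive rate
    fnr = np.mean(pred[mask & (truth == 1)] == 0)  # false-negative rate
    print(f"group {g}: FPR = {fpr:.2%}, FNR = {fnr:.2%}")
```

A persistent gap between subgroups is exactly the kind of observed impact that could trigger penalties or restrictions under an outcomes-based regime.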



Comparing the Risk, Process, and Outcomes Models


In these examples, the key takeaway is how the focus of regulation differs:


Risk Model:
  • Focus: Potential hazards before they occur, often requiring preemptive safety measures.


  • Execution: Strong in pre-market evaluation but can be rigid with predefined risk categories; accountability largely rests with regulators.


Process Model:
  • Focus: The methods and processes involved in developing and deploying the technology, often requiring documentation and transparency.


  • Execution: Operates both pre- and post-market, adapting to best practices; emphasizes corporate adherence to these practices.


Outcomes Model:
  • Focus: What actually happens once the technology is deployed, holding parties accountable for negative impacts and possibly requiring corrective action.


  • Execution: Typically post-market, driven by real-world data; holds entities responsible for outcomes.


Jurisdictions often mix and match these models based on their unique regulatory needs and philosophies. While each model has pros and cons, hybrids are common to capitalize on multiple strengths. Case in point: the EU AI Act is famed for its Risk Model foundation but also integrates elements from the Process and Outcomes models.


AI REGULATORY EXAMPLES AND ISSUES


The Global State of AI Regulation


Standardized AI regulation, though advancing, remains a complex terrain. It is clear the inherent risks merit global action. For a sense of the complexity, check out the OECD repository summarizing over 800 AI policy initiatives from 69 countries. The newly-passed EU AI Act and ongoing UN talks are hopeful signs of emerging alignment. Yet global AI regulatory philosophies aren’t in sync. As Matthias Spielkamp of AlgorithmWatch characterized the key players: “The EU is highly precautionary,” “The United States… has so far been the most hands-off,” and China “tries to balance innovation with retaining its tight control over corporations and free speech… And everyone is trying to work out to what degree regulation is needed specifically for AI.”


Given the transborder issues of business complexity and technical risks, both innovation and regulation would likely benefit from more standardization. Like regulation, this is likely to be a hybrid strategy. Daniel J. Gervais has proposed a model for global agreement: an international framework of ethical AI programming obligations on companies, programmers, and users that are then translated locally into compatible regulation. The focus of this alignment would be on three venues: large bodies of regional collaboration such as the US, EU, and OECD (the fastest route to alignment); the WTO; and lastly the UN, for longer-term agreements.


Social media’s global reach has made it abundantly clear: tech regulation is a global game—risks don’t respect borders. Companies once tempted to jurisdiction-shop are finding that unified rules might be less of a headache. And experts realize that AI-magnified technology risks don’t stop at the border either. Imagine scams like the ‘Business Email Compromise’ or the ol’ ‘Grandparent Scam’, supercharged by deep-fake AI calling from overseas: “Mom! Quick, run to the Apple Store and buy some gift cards to get me out of this foreign jail!” It’s clear: global AI threats need global solutions.


So, let’s consider the key emerging examples of AI regulation – the broad EU AI Act approach, the US Framework and Agency approach, and some major issues.



The EU AI Act – a Risk Based / Omnibus Approach


Similar in approach to Europe’s General Data Protection Regulation (GDPR), the toughest privacy and security law in the world, the region has taken a significant step with the EU Artificial Intelligence Act (AIA). It is the first comprehensive omnibus regulatory framework for AI and, as such (like GDPR), a likely global template. Both the GDPR and the AI Act have their legal basis in Article 16 of the Treaty on the Functioning of the European Union (TFEU), which allows EU institutions to make rules about protecting personal data – although the AIA goes much farther than just personal data. In June 2023, the European Parliament approved the proposed act, which is now in final negotiations with the EU Council and the European Commission.

The Act proposes a risk-based governance scheme creating new requirements across a broad range of entities and jurisdictions. It categorizes AI into four groups: minimal-risk, limited-risk, high-risk, and unacceptable-risk (prohibited). High-risk AI applications are those that pose a substantial threat to people’s health, safety, fundamental rights, or the environment. The Act also introduces transparency requirements for AI systems. For instance, generative language models like ChatGPT would need to disclose that content is AI-generated, distinguish deepfake images from real ones, and incorporate safeguards against the creation of illegal content.



The EU AI Act – The Risk Model Detailed


The regulatory framework defines four levels of risk in AI:

[Table: the four EU AI Act risk levels – minimal, limited, high, and unacceptable – with examples of what is allowed at each level]

The Act goes hard on ‘Unacceptable Risk’ AI, banning systems that pose clear threats to safety and rights:

  • “Dark-pattern AI” that exploits human vulnerabilities for harm.
  • “Social scoring” used by authorities to rank trustworthiness.
  • Real-time facial recognition in public spaces by law enforcement.


High-risk systems are highly regulated and come in two categories:

  1. Embedded AI in already-regulated products (see Annex II).
  2. Stand-alone systems in critical areas like robotic surgery (Annex III).


These systems will be subject to strict obligations before they can be put on the market, e.g., risk mitigation, high-quality training data, logging of activity, and appropriate human oversight. This enforces the principles of traceability and accountability we discussed in the ethics article.
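As a rough illustration of how such tiered obligations might be operationalized inside a company, here is a minimal sketch that maps an AI application to a risk tier and the obligations that tier implies. The tiers loosely mirror the Act’s four levels, but the rule set and obligation lists are simplified assumptions, not the Act’s actual taxonomy or annex text.

```python
# Hypothetical triage of an AI system into a risk tier, with the (simplified)
# obligations each tier might carry under a risk-based regime.
from dataclasses import dataclass

OBLIGATIONS = {
    "unacceptable": ["prohibited from the EU market"],
    "high": ["pre-market conformity assessment", "risk mitigation plan",
             "high-quality training data", "activity logging", "human oversight"],
    "limited": ["transparency notice to users (e.g., 'you are interacting with an AI')"],
    "minimal": ["voluntary codes of conduct"],
}

@dataclass
class AISystem:
    name: str
    use_case: str

def classify(system: AISystem) -> str:
    """Toy rule set; the Act's real annexes are far more detailed."""
    if system.use_case in {"social scoring", "real-time public facial recognition"}:
        return "unacceptable"
    if system.use_case in {"robotic surgery", "autonomous driving", "hiring"}:
        return "high"
    if system.use_case in {"chatbot", "deepfake generation"}:
        return "limited"
    return "minimal"

system = AISystem("LaneKeeper", "autonomous driving")
tier = classify(system)
print(f"{system.name}: {tier} risk -> obligations: {OBLIGATIONS[tier]}")
```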


It is worth noting that many of the applications described under the Act, such as high-risk applications of AI in aviation, cars, boats, elevators, medical devices, industrial machinery, etc., are already subject to existing agency regulations — such as the EU Aviation Safety Agency.



The EU AI Act – Impact Beyond the EU


The Act doesn’t skimp on penalties. Get caught breaching the prohibited AI practices, or failing to put in place a compliant data governance program for high-risk AI systems? You’re looking at fines up to €30 million or 6% of your global annual revenue, whichever stings more. That’s notably heftier than GDPR’s max hit of €20 million or 4%. And heads up, global businesses: just like GDPR, with its extraterritorial implications and Adequacy Decision, the AI Act’s reach isn’t limited to the EU. If you’re an outside company aiming to do business in the EU, get ready to play by their AI rules. Expect this Act to shake things up worldwide.
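The penalty arithmetic is worth making explicit. A minimal sketch using the figures cited above (the revenue amount is an invented example):

```python
# The AI Act's headline fine is the greater of a fixed cap or a share of
# worldwide annual turnover; illustrative arithmetic only.
def max_fine(global_revenue_eur: float,
             fixed_cap_eur: float = 30_000_000,
             revenue_share: float = 0.06) -> float:
    return max(fixed_cap_eur, revenue_share * global_revenue_eur)

# A company with EUR 10 billion in worldwide annual revenue:
print(f"EUR {max_fine(10_000_000_000):,.0f}")  # EUR 600,000,000 - the 6% share dominates
```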



The US – Slow to Pass Omnibus Regulatory Charters


Looking to craft a U.S. AI regulatory masterplan? Consider the cautionary tale of the ill-fated American Data Privacy and Protection Act. Aimed to be the U.S. twin to the EU’s GDPR, it cruised through committee but got ghosted by the 2022 Congress. The act would’ve had algorithm peddlers run checks before launching interstate, plus annual deep-dives for the big data sharks. Spoiler: comprehensive data privacy or AI regs in the U.S. are still in the vaporware stage.


Meanwhile, states aren’t waiting for the Feds to catch up. California hit fast-forward with its Consumer Privacy Act post-GDPR. And now, in the AI arena, states from California to Texas to New York are cooking up their own rulebooks. San Francisco? It went all in, giving facial recognition tech the boot for local government. Still, there is hope for a consistent US-wide AI regulatory framework before the hodgepodge of state and city laws becomes too fractured to follow.



The US Approach – Strategy More Focused on Process Models


In the U.S., efforts are in high gear on several fronts to develop a broad regulatory framework for AI, similar to the work done in the EU. At a strategic policy level, the National Security Commission on Artificial Intelligence was established in 2018 to examine AI, machine learning, and related technologies in national security and defense. There is also encouraging work between the US and the EU to align the elements of their respective AI regulatory frameworks. 


Two key policy documents have emerged in the US AI regulation strategy. In 2022, the White House’s Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights, outlining core principles for responsible AI design and deployment. The show-stealer is NIST’s AI Risk Management Framework from 2023. Notably, NIST’s framework mirrors the EU AI Act’s risk-centric stance, but is distinguished by its life-cycle methodology to identify, assess, and monitor emerging AI risks. The NIST model extends the 2022 OECD Framework for the Classification of AI Systems, with modifications elaborating the critical processes of test, evaluation, verification, and validation (TEVV) throughout an AI lifecycle.


The AI RMF Core is composed of four functions: GOVERN, MAP, MEASURE, and MANAGE. Each function is broken down into categories and sub-categories, each with specific actions and outcomes. While it may be a while before regulations require the NIST model in general commerce, it is already being adopted in US government procurement policies related to AI technology.
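Here is a minimal sketch of how an organization might track its own status against the four AI RMF core functions. The sub-items are paraphrased placeholders, not the framework’s official category text.

```python
# Hypothetical internal checklist keyed to the NIST AI RMF core functions.
AI_RMF_CORE = {
    "GOVERN":  ["policies and accountability structures in place",
                "roles for AI risk ownership assigned"],
    "MAP":     ["intended use and deployment context documented",
                "potential impacts on individuals and society identified"],
    "MEASURE": ["TEVV (test, evaluation, verification, validation) activities scheduled",
                "bias and robustness metrics tracked"],
    "MANAGE":  ["risk responses prioritized and resourced",
                "post-deployment monitoring and incident response defined"],
}

def completion_report(status):
    """status maps each function to the set of items the organization has addressed."""
    for function, items in AI_RMF_CORE.items():
        done = status.get(function, set())
        print(f"{function}: {len(done & set(items))}/{len(items)} items addressed")

completion_report({"GOVERN": {"policies and accountability structures in place"}})
```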




The US Approach – Regulatory Agency-Centric Implementation


But the lack of a federally mandated omnibus AI regulatory framework doesn’t mean the US federal government isn’t already regulating AI; significant AI regulations are being enacted by various US government agencies. While waiting for comprehensive AI regulation, agencies such as the Federal Trade Commission (FTC), the Food and Drug Administration (FDA), the Department of Commerce (DOC), the Equal Employment Opportunity Commission (EEOC), and the Government Accountability Office (GAO) have taken specific steps to regulate AI. The FTC, for instance, has announced it is focusing on preventing false or unsubstantiated claims about AI-powered products. And in April 2023, the FTC, the Civil Rights Division of the US Department of Justice, the Consumer Financial Protection Bureau, and the US Equal Employment Opportunity Commission issued a joint statement committing to fairness, equality, and justice in emerging automated systems, including those marketed as “artificial intelligence” or “AI”.


The FDA has also been involved in regulating AI in the medical field. In 2019, it published a discussion paper on Artificial Intelligence/Machine Learning (AI/ML)-based software as a medical device, followed by public meetings and workshops on specific medical uses, patient trust, and device safety and effectiveness. The FDA issued a finalized AI/ML action plan in January 2021, and as of October 2022, the FDA had authorized over 500 AI/ML-enabled medical devices via the 510(k), De Novo, and Premarket Approval (PMA) procedural pathways.




US – FDA / Medical AI Deep Dive


Let’s dial back on the strategic excitement, and focus on a specific sector: medicine. In April 2023, the FDA set forth a strategic direction for the rapidly evolving landscape of AI and Machine Learning (AI/ML) within healthcare. The Agency released draft guidance proposing an approach to ensure the safe and rapid modification of AI and machine learning-enabled devices in response to new data. This guidance underscores a balanced approach, facilitating continuous improvements in machine learning-enabled device software functions, while prioritizing patient safety and effectiveness.


Anyone who has been tracking the trajectory of AI/ML in drug development is well-aware of its multifaceted applications – they are as expansive as they are groundbreaking. However, with great innovation comes the critical need for clear regulation. The FDA aims to build a robust knowledge base and develop a clear understanding of the opportunities and challenges associated with employing AI/ML in drug development. Learning while regulating. 


This document highlighted three critical areas:


Human-led Governance, Accountability, and Transparency: This area underscores the critical role of human supervision throughout AI/ML’s development and usage phases. It stresses the need for consistent adherence to legal and ethical standards. Governance and accountability are paramount throughout every stage of the AI/ML lifecycle, with a strategic focus on identifying and addressing potential risks. While the specific details regarding transparency, its challenges, advantages, and best practices for human participation in drug development are still under definition, continued discussions and case studies are anticipated to provide clarity.


Quality, Reliability, and Representativeness of Data: The intricacy of training and validating AI/ML models calls for thorough attention to factors like biases, data integrity, privacy, provenance, relevance, replicability, and representativeness. This is a pivotal issue in all AI/ML models, and is particularly important in highly sensitive areas such as healthcare and drug development.


Model Development, Performance, Monitoring, and Validation: This section highlights the critical nature of comprehensive documentation, maintaining a clear data chain of custody, and following established steps for model assessment. Continuous monitoring and thorough documentation are underscored as vital elements in guaranteeing the AI/ML models’ reliability and consistency over time. Real-world case studies and feedback are identified as key resources for refining model oversight and continuously validating outputs.


Although embryonic, the FDA’s proposed framework illuminates the agency’s approach to integrating AI/ML in drug development. A fascinating revelation emerges: the same rigorous methodology the FDA has developed and refined for drug evaluation and approval since its founding in 1906 may very well be a model for AI/ML evaluation, approval, and oversight in drug development and other high-risk applications outside of medicine and healthcare, worldwide.


To put it succinctly, the rigorous standards the FDA employs for drug evaluation could very well set the benchmark for assessing the broader applications of AI/ML, especially in high-risk sectors. A case in point? The intricate web of informed consent in healthcare. Questions like, “How does AI factor into my treatment?”, “How will medical AI leverage my data?”, and “What’s the level of transparency in training this medical AI tool?” are not just specific to healthcare but resonate deeply with AI’s broader applications. These inquiries, while intricate, are fundamental in ensuring that all AI integrates seamlessly, transparently, and ethically into our lives.




Three Possible Future US Scenarios

A New AI Regulatory Agency?


In the discussion surrounding the passage of the EU AI Act and the stratospheric AI ‘hype curve’, there emerged a call for a new, single US AI regulatory agency. Echoing the full-scale war between the chat and search behemoths is a skirmish over creating a new dedicated agency. On one side, Sam Altman has called for a new AI regulatory agency (accompanying his call for more regulation), a position echoed by Microsoft. On the other side of the battlefield is Google, which is happy with the existing structure and more self-regulation.


A critique of these positions would suggest they are each positioning where they think they can exert the greatest influence or achieve regulatory capture. On the other hand, it is also true that the tech industry is smarting from the social media regulatory chaos, and some may well welcome a one-stop shop over having to deal with a dozen agencies. Suffice it to say, if it is hard enough to pass an omnibus regulatory act in the US, it is even less likely that there is legislative support for a completely new agency, but it remains a theoretical possibility.




Presidential Executive Order – A Strategic Compromise?


While it could take some time before the US adopts a regulatory omnibus similar to the EU’s, there are issues that suggest quicker action is important and attractive. One way to get this would be through a presidential executive order. It couldn’t force all business under the tent, but it could compel all the government agencies that regulate some aspect of AI to align their processes, and it could force government contracts to follow those rules. This is fairly compelling given that many of the AI mega-corps are big government contractors.

What makes this attractive?


  • With the flurry around the AI letter and the wave of AI startups there is a highly visible public spotlight on AI safety.
  • If the US doesn’t act at a national level, then there will be more potentially conflicting state and local regulations, which could hinder critical AI innovation.


One way of providing broad based national guidance is a proposal that the President could implement the bulk of the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework through executive order.


This would provide immediate ‘regulation’ in four ways.


  • First, it could require all government agencies developing, using, or deploying AI systems that affect people’s lives and livelihoods to ensure that these systems comply with best practices.
  • Second, it could instruct any federal agency procuring an AI system that has the potential to “meaningfully impact [our] rights, opportunities, or access to critical resources or services” to require that the system comply with these practices and that vendors provide evidence of this compliance.
  • Third, the executive order could demand that anyone taking federal dollars (including state and local entities) ensure that the AI systems they use comply with these practices.
  • Finally, this executive order could direct agencies with regulatory authority to update and expand their rulemaking to processes within their jurisdiction that include AI. (This includes, for example, the regulation of medical AI by the FDA.)




The Brookings Critical Algorithmic Systems Classification (CASC)


Short of creating a new agency, the Brookings Institution has recommended giving existing regulatory agencies two new powers. As the proposal puts it: “To address this challenge, this paper proposes granting two new authorities for key regulatory agencies: administrative subpoena authority for algorithmic investigations, and rulemaking authority for especially impactful algorithms within federal agencies’ existing regulatory purview. This approach requires the creation of a new regulatory instrument, introduced here as the Critical Algorithmic Systems Classification, or CASC. The CASC enables a comprehensive approach to developing application-specific rules for algorithmic systems and, in doing so, maintains longstanding consumer and civil rights protections without necessitating a parallel oversight regime for algorithmic systems.” It’s a model that focuses on algorithm regulation and combines a risk-model focus with process improvements.




Future Scenarios in Sum


While it seems unlikely that a new agency would be created just to deal with AI, especially given that so many applications fall explicitly within the purview and charter of existing agencies, it is possible that a special regulatory body would be created for high-risk applications or those currently prohibited under the EU AI Act. It certainly does seem possible that a US Executive Order would be useful in aligning government agencies around the NIST Framework. And some process improvements, such as subpoena authority for algorithmic investigations, might become attractive as regulatory agencies try to fathom the complexities of AI models and applications. Like the Red Queen’s advice – you have to keep moving just to keep up.




AI Regulation is Also About Fostering Innovation – UK +


Regulating AI isn’t just about restricting high-risk uses or ensuring that regulations don’t limit innovation. Given the importance of technical innovation to national economies, an emerging goal of AI ‘regulation’ is to ensure that it actively fosters and encourages innovation. For example, in 2021, the National Artificial Intelligence Initiative was launched to ensure US leadership in the responsible development and deployment of trustworthy AI. More recently, the UK has embarked on an AI regulation model that emphasizes its national competitive position in AI technology (in comparison with the EU), adopting the UK’s National AI Strategy in line with the principles set out in the UK Plan for Digital Regulation. The stated strategy is that the framework will be “pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative”, suggesting in some ways that the UK would be an AI regulatory island apart from the EU. However, UK AI businesses would of course have to comply with the EU Act’s implementation to do business in the EU.


This focus on competitive innovation is echoed in some way in almost all of the national strategy elements of AI regulatory proposals.

  • The EU AI Act includes specific measures to support innovation, such as AI regulatory sandboxes to support Small and Medium-Sized Enterprises (SMEs) and start-ups. There are attempts in the Act to balance the burden across the AI supply chain, e.g., placing the larger burden on AI product manufacturers and less on intermediate and end users.
  • In 2023, the US National Science Foundation published a paper, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource, which proposes providing computational, data, testbed, and software resources to AI researchers.


We want to be protected from AI abuses and guaranteed trustworthiness, but we also want the benefits of AI to not be blocked by regulation. 



Some Outstanding Issues

Regulation of GPAI – the Who to Regulate Problem


As noted above, the Act includes specific measures to support innovation, such as AI regulatory sandboxes for SMEs and start-ups, and attempts to balance the burden across the AI supply chain, placing more on AI product manufacturers and less on intermediate and end users. But there remains significant controversy over the Act’s regulatory focus on general purpose AI (GPAI) tools and open-source applications.


The purveyors of general purpose AI – ChatGPT, for example – argue that GPAI by itself isn’t a high-risk application and so should be exempt from regulation. This is the case that OpenAI made to the EU in its September 2022 whitepaper, titled OpenAI Whitepaper on the European Union’s Artificial Intelligence Act. The ‘opposition’ argues that this makes it too easy to use ChatGPT or other GPAI tools in high-risk applications. Who then is responsible? In the EU Act’s philosophy, it would be the manufacturer of the technology (who is best placed to manage the risk), not the end user (just as the driver of a car is not responsible for the safety of its design). As with so many of these issues, a hybrid strategy likely emerges: one that combines the risk, process, and outcomes models, and balances the focus on manufacturers vs. users.



Regulation of Open Source AI – the Racist “Franken-Tay” Problem


Everybody remembers Tay, Microsoft’s Twitter bot, which went quickly from friendly bot to racist rogue and had to be shut down within 16 hours of launch. Current generative AI, like ChatGPT, implements algorithmic ethical safeguards and filters to limit these issues in training and to prevent expressing these behaviors in operation. What did the Internet do? It responded with DAN (‘Do Anything Now’), a prompt-engineering technique to ‘jailbreak’ the safeguards built into ChatGPT. OpenAI, in turn, continually refines its safeguards. This ‘arms race’ is manageable in controlled environments, but what about open-source AI?


Risk Model-based AI regulation focuses oversight on ‘big tech’ entities like Microsoft or Google. But how do we prevent careless or rogue open-source developers, particularly on the dark web, from masterminding a Franken-Tay with no guardrails? Open-source AI is a challenge. Regulatory responses include sandboxes for experimental projects, variable definitions of the ‘agent’ being regulated, and even open-source databases of algorithmic risks to promote better open-source practice. Still, this remains one of the big challenges in AI regulation.




The ‘Alignment’ Problem


Isaac Asimov proposed three laws to ensure that ‘robots’ don’t harm humans. But how? There is a risk of ‘misspecification’, where you tell the algorithm to do, or not do, ‘X’, but it then cleverly subverts the intended rules in unforeseen, emergent ways. The field of AI alignment, inspired in part by Nick Bostrom’s speculations on AI controllability, seeks to meticulously synchronize AI behaviors with human intentions, and serves as a pivotal domain in AI governance.


This is the core technical battleground in algorithmic AI governance. The research delves into advanced technical methodologies like Inverse Reinforcement Learning (IRL), Reinforcement Learning from Human Feedback (RLHF), and Adversarial Training, aiming to mold AI’s actions to align closely with human goals. Virtually all of the AI giants have programs to actualize AI alignment; for instance, DeepMind is pioneering ‘Safe Reinforcement Learning’, and OpenAI is advancing reward modeling as part of its ‘superalignment’ project.
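To give a flavor of what ‘learning from human feedback’ means technically, here is a minimal sketch of the core idea behind reward modeling: fit a scalar reward so that responses humans preferred score higher than the ones they rejected (a Bradley-Terry style objective). The toy features and data are invented; production systems learn this reward over a large language model rather than a linear model.

```python
# Toy reward model trained on pairwise human preferences (Bradley-Terry objective).
import numpy as np

rng = np.random.default_rng(1)
n_pairs, n_features = 200, 5
preferred = rng.normal(0.5, 1.0, size=(n_pairs, n_features))   # features of chosen responses
rejected  = rng.normal(0.0, 1.0, size=(n_pairs, n_features))   # features of rejected responses

w = np.zeros(n_features)      # linear reward model: reward(x) = w @ x
lr = 0.1
for _ in range(500):
    margin = preferred @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))                  # P(preferred beats rejected)
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad                                     # gradient step on the negative log-likelihood

print("Learned reward weights:", np.round(w, 2))
```

The learned reward is then used to steer a model’s behavior, which is where the transparency and accountability questions raised below come back in.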


This research importantly addresses both AI ethics and AI regulation. However, it faces criticism, ranging from its association with ‘longtermists’ to the inherent technical challenges of instilling alignment in systems like LLMs, MLLMs, and other machine learning models, where the technical architecture often compromises transparency and accountability.

AI alignment is integral to the broader strategy of AI governance, but it isn’t a standalone solution. It’s a technical complement to strategies like self-governance and risk-model frameworks, hopefully yielding a harmonious governance ecosystem. The key technical challenge is how to integrate alignment and transparency.




Summary


Alright, let’s cut through the fog: feeling a smidge safer? Yes, but clarity’s still a luxury. We’re doing better than we did with the earlier challenges of internet and social media exploitation. Thanks to the power players (AI titans, policy wonks, and ivory-tower scholars) rolling up their sleeves and crafting the AI rulebook, we’re steering away from the dark abyss of unchecked digital chaos.

 

However, there remain tactical glitches in the matrix. The EU’s act awaits ‘implementation’; is the risk-based model cutting it? The jury’s out. Transborder tech drama? The US has made only an incomplete beginning. This all necessitates a whole next level of harmonization.


And there’s still the strategic Red Queen dilemma. With the AI sphere and its numerous applications evolving rapidly, three pivotal concerns arise:


  • Can the rule-makers even keep pace with this relentless tech evolution?
  • How adeptly can developers navigate this shifting regulatory terrain?
  • And the million-dollar question for all players—what’s my responsibility?


It’s encouraging that there’s no single echo chamber amongst Big Tech – for example, the new single-agency concept is supported by some and opposed by others. Some cheer on new regs, while others play defense. This diversity can help, but vigilance remains a necessity.


What about the Terminator? Most of the chatter’s about benign, commercial bots. But let’s not kid ourselves: some tech is weaponized, ready for cross-border mind games and kill missions. The big military players flex both offensive and defensive muscles. But those second-tier nations? Vulnerable. And a few could be existential bad actors. Notably, Pippa Malmgren, former US and UK cabinet advisor, argues that we’re already entrenched in a technological World War III.


To highlight where we stand, consider: US Senate Majority Leader Chuck Schumer convened a private session on AI regulation with some 60 senators and the key tech heavyweights. ‘X’ Chairman Elon Musk emerged saying there was “overwhelming consensus” for regulation of AI, and lawmakers said “there was universal agreement about the need for government regulation of AI, but it was unclear how long it might take and how it would look”.


So, whether you’re coding, legislating, or just consuming AI, get focused. Assign a watchdog for this circus because you don’t want to be the last to know.


Stay tuned for our part three of this series, where we examine the impact of AI Ethics and Regulation on Business & Investment Strategy.


an aerial view of a picture frame in the middle of a park, Dubai
By Raiven Capital 05 Feb, 2024
Raiven Capital recently announced the launch in Dubai of its Dubai International Financial Center (DIFC) based $125M USD tech venture fund. We are proud of this accomplishment, and can’t wait to have new investors become part of our fund.
By Dr. James Baty, Supreet Manchanda and Paul Dugsin 15 Dec, 2023
by Dr. James Baty PhD, Operating Partner & EIR and and Supreet Manchanda & Paul Dugsin, Founding Partners RAIVEN CAPITAL This is our third release in our series on governing Artificial Intelligence – AI Promise or Peril This release consists of three sections AI Impact on Business & Markets AI Impact on VC Operations Impact of AI Governance in VC The first release on AI Ethics is available here . The second release on AI Regulation is available here . In our inaugural episode of our ‘AI, Promise or Peril’ series, we delved into the clamor surrounding Artificial Intelligence (AI) Ethics—a field as polarizing as it is fascinating. Remember the Future of Life Institute’s six-month moratorium plea, backed by AI luminaries? Opinions ranged from apocalyptic warnings to messianic proclamations to cries of sheer hype. In our second episode, we examined the chaos around the emerging AI Regulation, a cacophony of city, state, national, and international regulatory panels, pronouncements, and significant legislative and commission enactments. We examined the EU AI Act, and the US NIST AI Risk Framework amongst key models. We suggested there was a strong case for a US Executive Order on AI based on the NIST AI-RMF. Keep in mind that the US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence was issued by President Biden on October 30th 2023. In addition, the United Nations announced in October the creation of a 39-member advisory body to address issues in the international governance of artificial intelligence. Our blog posts observed that the emerging solution to AI governance is not one act or law; it encompasses corporate self-governance, industry standards, market forces, national and international regulations, and especially AI systems regulating other AI systems. AI governance impacts not just those developing AI projects, but also those leveraging existing AI tools. Navigating this terrain involves a complex interplay of legal regulations and voluntary ethical standards. Whether you’re at the helm of an AI project or leveraging AI tools developed by others, a complex web of ethical and regulatory issues awaits.
a robotic hand is reaching out to a human hand .
By Dr. James Baty 18 Jul, 2023
By Dr. James Baty Advisor, Raiven Capital The headlines around AI are screaming for attention: Launching yet another AI company, Elon announced on his Twitter Spaces that he had warned Chinese leaders that digital superintelligence could unseat their government!, Other headlines herald the coming super-positive impacts on world economies: Goldman Sachs noted that generative AI could boost GDP globally by 7 percent. Amid calls for more regulation, the debate surrounding artificial intelligence has taken a multifaceted turn, blending apprehensions with aspirations. The fears and uncertainties often act as catalysts for attention and advancement in AI. The technological prowess, the risks, the allure – it’s all a heady brew. While some workers clutch their paychecks fearing obsolescence, shrewd employers rub their hands together asking, “Can AI trim my overhead?” In this three-part Dry Powder series, I will deconstruct the issues around AI governance: ethical frameworks, emerging governmental regulation and the impact AI governance is having in venture capital funds. As a technologist, my career designing and advising on large-scale tech architecture strategy has leveraged and suffered the previous two of Kai-Fu Lee’s ‘Waves of AI’. Clearly this third wave is big. Setting the Stage: Ethical Principles of Artificial Intelligence The question of AI safety and regulation has sparked heated discussion globally, especially as AI adoption spreads like wildfire. Despite calls for more regulation, the Washington Post reported that Twitch, Microsoft and Twitter are now laying off their ethics teams , adding to the industry’s increasing dismissal of those leading the work on AI governance. This should give us pause: what are the fundamental ethical principles of AI? Why are some tech executives and others spending millions to warn the public about it? Should it be regulated? Part of the answer is that fear sells. In part, AI is already regulated, but more of it is on the way. First, Let’s Discuss “The Letter” Enter the March storm: Pause Giant AI Experiments: An Open Letter . Crafted by the Future of Life Institute , and signed by many of the AI ‘hero-founders,’ (who warn us about AI, while they aggressively are developing it), this letter thundered through the scientific and AI community. There were calls for a six-month halt to AI research, while the red flag of existential threats was raised. The buzz generated by the letter was notable. But, hold the phone! Forgeries appeared among the signatures, echoing ChatGPT’s infamous “ hallucinations .” Moreover, some of the actual signatories backtracked. Critically, many experts in AI research, the scientific community, and public policy underscored that the letter employed hype to peddle technology. Case in point, Emily M. Bender, a renowned neurolinguistics professor and co-author of the first paper cited in the letter, expressed her discontent. She called out the letter for its over-the-top drama and misuse of her research, coining it as “dripping with #AIhype.” Bender’s comments are suggestive of a cyclical pattern in technology adoption, where fear and hype are instrumental drivers of decision-making. As technology historian David Noble documented, the adoption of workplace and factory floor automation that swept the 1970s and ‘80s was driven by managers’ competitive fear that came to be known as FOMO (‘Fear of Missing Out’). Prof. 
Bender’s critique points to ‘longtermism,’ a hyper-focus on the distant horizon, while eclipsing more urgent current issues of misrepresentation, discrimination, and AI errors. Still the legitimate question remains, how should artificial intelligence be governed? How Should AI Be Governed? As we explore the labyrinth of AI governance, it’s imperative to first recognize the importance of ethical and safety principles in its development and implementation. Similar to other technologies, there are already in place industrial practices guidance and regulation of AI, not only for basic industrial safety, but also for ethics. AI poses unique challenges compared to previous technologies, necessitating tailored regulations. Determining how to regulate it involves more than just legal measures by governments and agencies. How do we develop an overall technical framework for AI governance? In 2008, Prof. Lawrence B. Solum from the University of Illinois College of Law published a paper that analyzed internet governance models. These include the different models of self-governance, market forces, national and international regulations, and even governance through software code and internet architecture. This framework can also be applied to AI governance. Considering the full range of mechanisms — industry standards, legal frameworks, and AI systems regulating other AI systems. Governance necessitates not one form, but a comprehensive approach with multiple models of regulation. It requires long-term considerations, yet must address short-term immediate challenges so that it ensures responsible and ethical development of AI. By integrating industry standards with legal frameworks and technology-specific regulations, we can work towards creating a sustainable and ethical AI ecosystem. What are the Key Principles for Ethical and Safe AI? The past decade has been marked by a surge in technical and public policy discourse aimed at establishing frameworks for responsible AI that go far beyond “ Asimov’s Three Laws ,” which protect human beings from robotics gone awry. The plethora of notable projects includes: The Asilomar AI Principles (sponsored by the Future of Life Institute), The Montreal Declaration for Responsible AI, the work by IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems, the European Group on Ethics in Science and New Technologies (EGE), and the ISO/IEC 30100:2018 General Guidance for AI. These undertakings have subsequently inspired specific corporate policies, including, for example, the Microsoft Responsible AI Standard v2 and the BMW Group Code of Ethics for AI. There are so many other notable attempts to provide frameworks, perhaps too many. A useful cross-framework analysis by Floridi and Cowls examined six of the most prominent expert-driven frameworks for governing AI principles. They synthesized 47 principles into five: Beneficence: Promoting well-being, preserving dignity, and sustaining the planet. Non-Maleficence: Focusing on privacy, security, and exercising “capability caution.” Autonomy: Upholding the power of individuals to make decisions. Justice: Promoting prosperity, preserving solidarity, and avoiding unfairness. Explicability: Enabling the Other Principles through Intelligibility and Accountability. These principles provide a framework to guide ethical decision-making in AI development. That last one is AI’s distinctive stamp on the ethical spectrum. 
AI should not just ‘do,’ it must ‘explain.’ Unlike most previous technological advancements like the similar foundational principles of bioethics, artificial intelligence should be required to explain itself and be accountable to users, the public, and regulators. Are These Principles Being Implemented? Yes. Virtually all major companies engaged in artificial intelligence are members of the Partnership on AI and are individually implementing some form of governing principles. The partnership comprises industry members (13), nonprofit organizations (62) and academic institutions (26). It also is international, operating across 17 countries. The community’s shared goal is to collaborate and create solutions that ensure AI advances positive outcomes for people and society. Members include companies such as Amazon, Apple, Google, IBM, Meta, Microsoft, OpenAI, and organizations like the ACM, Wikimedia, the ACLU, and the American Psychological Association. Notably, large global corporations that have implemented such principles are complex global entities. They require parallel implementation by division or geography. For example, AstraZeneca, as a decentralized organization, has set up four enterprise-wide AI governance initiatives, including: overarching guidance documents, a Responsible AI Playbook, an internal Responsible AI Consultancy Service & Resolution Board, and the commissioning of AI audits via independent third parties. AI audits are a key part of any compliance structure, and are recommended in many frameworks. This enterprise model is a sort of ‘principles of AI principles’. AI Ethics: A Form of Governmental Competitive Differentiation In establishing governmental principles, Europe is a trailblazer. In September 2020, the EU completed its EAVA ethical AI framework . The key conclusion: by exploiting a first-mover advantage, a common EU approach to ethical aspects of AI has the potential to generate up to €294.9 billion in additional GDP and 4.6 million additional jobs for the European Union by 2030. Governments can feel FOMO too. The framework emphasizes that existing values, norms, principles and rules are about governing the action of humans and groups of humans as the key source of danger, not designed for algorithms. The EU warned “the technological nature of AI systems, and their upcoming features and applications could seriously affect how governments address four ethical principles: respect for human autonomy, prevention of harm, fairness, explicability.” Literally every government is adopting some form of ethical AI framework. The 2018 German AI strategy contains three commitments: make the country a global leader in AI, protect and defend responsible AI, and integrate AI in society while following ethical, legal, cultural and institutional provisions. Similarly, the 2019 Danish national strategy for artificial intelligence includes six principles for ethical AI: self-determination, dignity, responsibility, explainability, equality and justice, and development. It also provides for the establishment of a national Data Ethics Council. In 2021, the US launched the National Artificial Intelligence Initiative to ensure US leadership in the development and use of trustworthy AI. 
In 2022, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights.This June, the European Parliament passed the European Artificial Intelligence Act , which not only regulates commercial use of AI, but sets principles addressing government use of AI (e.g., limiting national surveillance technology). But What About Military AI? In most dystopian AI fiction, military AI takes over. We’re especially worried about Colossus, Skynet and Ultron, the most evil AI presented in film. In real life, most nations provide for separate governance of AI for defense and security. In 2020, the US Department of Defense, Joint Artificial Intelligence Center, adopted AI Ethical Principles for governance of combat and non-combat AI. The five principles are that AI is responsible, equitable, traceable, reliable and governable.
a person is holding a green leaf in their hand .
20 Sep, 2022