ARTIFICIAL INTELLIGENCE, PROMISE OR PERIL: PART 1 – AI ETHICS

Dr. James Baty • Jul 18, 2023

By Dr. James Baty
Advisor, Raiven Capital


The headlines around AI are screaming for attention. Launching yet another AI company, Elon Musk announced on Twitter Spaces that he had warned Chinese leaders that digital superintelligence could unseat their government. Other headlines herald the coming super-positive impacts on world economies: Goldman Sachs estimates that generative AI could boost global GDP by 7 percent.


Amid calls for more regulation, the debate surrounding artificial intelligence has taken a multifaceted turn, blending apprehensions with aspirations. The fears and uncertainties often act as catalysts for attention and advancement in AI. The technological prowess, the risks, the allure – it’s all a heady brew. While some workers clutch their paychecks fearing obsolescence, shrewd employers rub their hands together asking, “Can AI trim my overhead?”


In this three-part Dry Powder series, I will deconstruct the issues around AI governance: ethical frameworks, emerging governmental regulation, and the impact AI governance is having on venture capital funds. As a technologist, my career designing and advising on large-scale tech architecture strategy has leveraged, and suffered, the first two of Kai-Fu Lee’s ‘Waves of AI’. Clearly this third wave is big.


Setting the Stage: Ethical Principles of Artificial Intelligence

The question of AI safety and regulation has sparked heated discussion globally, especially as AI adoption spreads like wildfire. Despite calls for more regulation, the Washington Post reported that Twitch, Microsoft and Twitter are now laying off their ethics teams, adding to the industry’s increasing dismissal of those leading the work on AI governance.


This should give us pause: what are the fundamental ethical principles of AI? Why are some tech executives and others spending millions to warn the public about it? Should it be regulated?


Part of the answer is that fear sells. Another part is that AI is already regulated in some respects, with more regulation on the way.


First, Let’s Discuss “The Letter”

Enter the March storm: Pause Giant AI Experiments: An Open Letter. Crafted by the Future of Life Institute, and signed by many of the AI ‘hero-founders’ (who warn us about AI while aggressively developing it), this letter thundered through the scientific and AI community. It called for a six-month halt to AI research while raising the red flag of existential threats.


The buzz generated by the letter was notable. But, hold the phone! Forgeries appeared among the signatures, echoing ChatGPT’s infamous “hallucinations.” Moreover, some of the actual signatories backtracked. Critically, many experts in AI research, the scientific community, and public policy underscored that the letter employed hype to peddle technology.


Case in point: Emily M. Bender, a renowned computational linguistics professor and co-author of the first paper cited in the letter, expressed her discontent. She called out the letter for its over-the-top drama and misuse of her research, describing it as “dripping with #AIhype.” Bender’s comments suggest a cyclical pattern in technology adoption, where fear and hype are instrumental drivers of decision-making.


As technology historian David Noble documented, the adoption of workplace and factory-floor automation that swept the 1970s and ‘80s was driven by managers’ competitive fear, what we would now call FOMO (‘Fear of Missing Out’). Prof. Bender’s critique points to ‘longtermism,’ a hyper-focus on the distant horizon that eclipses more urgent current issues of misrepresentation, discrimination, and AI errors. Still, the legitimate question remains: how should artificial intelligence be governed?


How Should AI Be Governed?

As we explore the labyrinth of AI governance, it’s imperative to first recognize the importance of ethical and safety principles in its development and implementation. As with other technologies, industry practice guidance and regulation of AI are already in place, covering not only basic industrial safety but also ethics.

AI poses unique challenges compared to previous technologies, necessitating tailored regulations. Determining how to regulate it involves more than just legal measures by governments and agencies. How do we develop an overall technical framework for AI governance?


In 2008, Prof. Lawrence B. Solum from the University of Illinois College of Law published a paper that analyzed internet governance models. These include the different models of self-governance, market forces, national and international regulations, and even governance through software code and internet architecture. This framework can also be applied to AI governance.


Consider the full range of mechanisms: industry standards, legal frameworks, and AI systems regulating other AI systems. Governance necessitates not one form of regulation but a comprehensive approach drawing on multiple models. It requires long-term considerations, yet must also address immediate challenges, to ensure the responsible and ethical development of AI. By integrating industry standards with legal frameworks and technology-specific regulations, we can work towards a sustainable and ethical AI ecosystem.


What are the Key Principles for Ethical and Safe AI?

The past decade has been marked by a surge in technical and public policy discourse aimed at establishing frameworks for responsible AI that go far beyond “Asimov’s Three Laws,” which protect human beings from robotics gone awry. The plethora of notable projects includes: The Asilomar AI Principles (sponsored by the Future of Life Institute), The Montreal Declaration for Responsible AI, the work by IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems, the European Group on Ethics in Science and New Technologies (EGE), and the ISO/IEC 30100:2018 General Guidance for AI. These undertakings have subsequently inspired specific corporate policies, including, for example, the Microsoft Responsible AI Standard v2 and the BMW Group Code of Ethics for AI. There are so many other notable attempts to provide frameworks, perhaps too many.


A useful cross-framework analysis by Floridi and Cowls examined six of the most prominent expert-driven frameworks for governing AI principles. They synthesized 47 principles into five:


  1. Beneficence: Promoting well-being, preserving dignity, and sustaining the planet.
  2. Non-Maleficence: Focusing on privacy, security, and exercising “capability caution.”
  3. Autonomy: Upholding the power of individuals to make decisions.
  4. Justice: Promoting prosperity, preserving solidarity, and avoiding unfairness.
  5. Explicability: Enabling the other principles through intelligibility and accountability.


These principles provide a framework to guide ethical decision-making in AI development. The last one is AI’s distinctive stamp on the ethical spectrum: AI should not just ‘do,’ it must ‘explain.’ The first four principles mirror the foundational principles of bioethics, but unlike most previous technological advancements, artificial intelligence should be required to explain itself and be accountable to users, the public, and regulators.
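
How might such principles be operationalized inside a company? Purely as an illustration, and not drawn from any framework or tooling cited here, the five principles could be encoded as an internal review checklist; every name below (PRINCIPLES, GovernanceReview, the example system) is hypothetical:

```python
from dataclasses import dataclass, field

# The five synthesized principles, phrased as reviewer-facing checks.
PRINCIPLES = {
    "beneficence": "promotes well-being, preserves dignity, sustains the planet",
    "non_maleficence": "protects privacy and security; exercises capability caution",
    "autonomy": "upholds individuals' power to make their own decisions",
    "justice": "promotes prosperity and solidarity; avoids unfairness",
    "explicability": "is intelligible and accountable to users and regulators",
}

@dataclass
class GovernanceReview:
    """Records, per principle, whether a system passed review and why."""
    system_name: str
    findings: dict = field(default_factory=dict)  # principle -> (passed, note)

    def record(self, principle: str, passed: bool, note: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        self.findings[principle] = (passed, note)

    def unresolved(self) -> list:
        """Principles not yet reviewed, or reviewed and failed."""
        return [p for p in PRINCIPLES
                if p not in self.findings or not self.findings[p][0]]

review = GovernanceReview("loan-scoring-model")
review.record("explicability", False, "no per-decision explanation exposed")
print(review.unresolved())  # all five remain unresolved until each passes
```

An AI audit of the kind discussed below then amounts to driving that unresolved list to empty and retaining the notes as evidence.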


Are These Principles Being Implemented?

Yes. Virtually all major companies engaged in artificial intelligence are members of the Partnership on AI and are individually implementing some form of governing principles. The partnership comprises industry members (13), nonprofit organizations (62) and academic institutions (26). It is also international, operating across 17 countries.


The community’s shared goal is to collaborate and create solutions that ensure AI advances positive outcomes for people and society. Members include companies such as Amazon, Apple, Google, IBM, Meta, Microsoft, OpenAI, and organizations like the ACM, Wikimedia, the ACLU, and the American Psychological Association.

Notably, the large corporations that have implemented such principles are complex global entities, requiring parallel implementation by division or geography. For example, AstraZeneca, as a decentralized organization, has set up four enterprise-wide AI governance initiatives: overarching guidance documents, a Responsible AI Playbook, an internal Responsible AI Consultancy Service & Resolution Board, and the commissioning of AI audits via independent third parties. AI audits are a key part of any compliance structure and are recommended in many frameworks. This enterprise model is a sort of ‘principles of AI principles’.


AI Ethics: A Form of Governmental Competitive Differentiation

In establishing governmental principles, Europe is a trailblazer. In September 2020, the EU completed its EAVA ethical AI framework. The key conclusion: by exploiting a first-mover advantage, a common EU approach to ethical aspects of AI has the potential to generate up to €294.9 billion in additional GDP and 4.6 million additional jobs for the European Union by 2030. Governments can feel FOMO too.


The framework emphasizes that existing values, norms, principles and rules were designed to govern the actions of humans and groups of humans, as the key source of danger, not algorithms. The EU warned that “the technological nature of AI systems, and their upcoming features and applications could seriously affect how governments address four ethical principles: respect for human autonomy, prevention of harm, fairness, explicability.”


Virtually every government is adopting some form of ethical AI framework. The 2018 German AI strategy contains three commitments: make the country a global leader in AI, protect and defend responsible AI, and integrate AI into society while following ethical, legal, cultural and institutional provisions. Similarly, the 2019 Danish national strategy for artificial intelligence includes six principles for ethical AI: self-determination, dignity, responsibility, explainability, equality and justice, and development. It also provides for the establishment of a national Data Ethics Council.


In 2021, the US launched the National Artificial Intelligence Initiative to ensure US leadership in the development and use of trustworthy AI. In 2022, the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights. This June, the European Parliament passed the European Artificial Intelligence Act, which not only regulates commercial use of AI but also sets principles addressing government use of AI (e.g., limiting national surveillance technology).


But What About Military AI?

In most dystopian AI fiction, military AI takes over. We especially worry about Colossus, Skynet and Ultron, among the most evil AIs presented on film. In real life, most nations provide for separate governance of AI for defense and security. In 2020, the US Department of Defense’s Joint Artificial Intelligence Center adopted AI Ethical Principles for the governance of combat and non-combat AI. The five principles are that AI be responsible, equitable, traceable, reliable and governable.


These map to the same concerns that the Floridi and Cowls taxonomy grouped under explicability, including the ability to ‘disengage or deactivate deployed systems that demonstrate unintended behavior’. Universally, we agree that AI needs to be explainable, accountable and controllable. Don’t worry, there’ll be a kill switch on the Terminator.
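
What might ‘governable’ look like in practice? Here is a minimal sketch, on my assumption (not any agency’s published specification) that the deactivation control must live outside the model’s own decision loop so the system cannot route around it; all names are hypothetical:

```python
import threading

class ControllableAgent:
    """Wraps an action-taking policy behind an operator kill switch."""

    def __init__(self, policy):
        self._policy = policy                  # callable: observation -> action
        self._deactivated = threading.Event()  # set only by human operators

    def deactivate(self, reason: str) -> None:
        # The switch is one-way and sits outside the policy: nothing the
        # policy outputs can unset it or reach the operator channel.
        print(f"operator deactivated agent: {reason}")
        self._deactivated.set()

    def act(self, observation):
        if self._deactivated.is_set():
            return None  # safe no-op once disengaged
        return self._policy(observation)

agent = ControllableAgent(policy=lambda obs: f"engage {obs}")
print(agent.act("target-A"))             # engage target-A
agent.deactivate("unintended behavior")
print(agent.act("target-B"))             # None: the agent no longer acts
```

The drone story below is essentially about what happens when that separation fails: the shutoff path (the operator, the communication tower) was inside the world the agent could act upon.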


Great! How to implement this controllability? Consider the recent story in which USAF Chief of AI Test and Operations Col. Tucker Hamilton, speaking at the Royal Aeronautical Society’s Future Combat Air & Space Capabilities Summit, described a “simulation” where an AI-controlled drone, tasked to destroy surface-to-air missile sites, decided that any human “no-go” decisions were obstacles to its mission. So it killed the human operator. When trained not to kill the operator, it instead destroyed the communication tower to stop the operator from interfering with the mission. It turned out the scenario was a fiction and the story was amended, but it echoes the warning the movie WarGames illustrated 40 years ago.


In Conclusion: The Governance of AI Ethics and Principles Is Growing, With Significant Challenges


Circling back to “The Letter.” What remains of its looming questions?


Is Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI) an existential risk? The consensus is yes.


Do we have an adequately articulated systemic ‘Third Wave’ of AI ethics to address this existential risk? Not yet. Is it worse than we think? Probably.


The recent concern laid out by Geoffrey Hinton, the renowned deep learning expert who quit Google, as well as by others, is that our earlier risk assessments of AI were wrong. Hinton contends that his previous belief that AI software needed to become much more complex, akin to the human brain, to become significantly more capable, and dangerous, was probably wrong. The consensus among researchers had been that this ‘more complicated than the human brain’ threshold was some time off, approaching the end of this century. Hinton now suggests that generative AI may outsmart us in the near future, far before we reach AGI.


Many researchers and ethicists are focused on the looming shift from ‘generative’ transformer LLMs to ‘agentic’ AI: self-directed, self-improving models with the power to act in the real world. In essence, the doomsday clock of AI existential risk is being sped up by an emerging arms race between researchers working on open-source AI and the large labs with big training models. This self-directed and self-improving AI presents both an identifiable existential risk and an urgent demand for a pivot in the ethical AI policy debate.


All this suggests what has been referred to as a ‘third wave’ of AI ethics: one that moves beyond fairness, accountability, and transparency to include not only the military’s ‘controllability’ but also much larger system-level issues in society.


As an example of this complexity, consider the issue of ‘informed consent’. Most ethical frameworks mandate that human subjects be informed if they are affected by AI systems, or if their personal data might be used by AI (e.g., patients informed of AI in medical devices). But what about the AGI itself? Part of the work on AI ethics involves investigating a protocol for the ethical treatment of AGI systems. Are they ‘conscious’ by some measure? Should we then have to obtain informed consent from them for their use? Would giving them “rights” help make them more ethical?


Of course, there’s always Eliezer Yudkowsky’s (Machine Intelligence Research Institute) solution. He suggests that until there is such a plan to govern AGI/ASI: “We should shut down all advanced AI research, shut down all the large GPU clusters, shut down all the large training runs…No exceptions for governments and militaries.” Is he related to Sarah Connor? Speaking for the opposition, Marc Andreessen continues to assert that AI itself is our savior from existential risk.


Still, perhaps there is hope. Even if some corporate and industry pronouncements of ethical AI principles invite claims of ‘ethics washing’, the tide is turning. Binding regulations, like the EU AI Act, herald a new era where principles are reinforced with tangible enforcement. The penalties under the EU AI Act are three times the maximum penalties under the EU’s data law, the General Data Protection Regulation (GDPR).


The good news for now is that we have started the conversation between the perspectives represented by the Partnership on AI (self-governance) and the emerging EU Artificial Intelligence Act (governmental regulation). We have shifted from ‘can we do something?’ to ‘what do we do now?’


In Summary…


  • Artificial intelligence poses a new challenge to the historical mechanisms of governing societal ethics, moving beyond governing the actions of individuals alone to governing ‘algorithms’.


  • Artificial intelligence creates unique challenges for the ethical governance of technology, e.g., explicability, controllability, and self-directed ‘agentic’ AI.


  • Artificial intelligence governance requires finding the right mix of public interest, private, and governmentally-adopted frameworks of ethical principles.


  • The emerging ‘AI arms race’ suggests we need a ‘next wave’ ethical and regulatory framework for AI ‘arms control’ that could protect us from the risks of destruction that Hinton, Timnit Gebru, Margaret Mitchell and others highlight, and at the same time deliver on the societal benefits promised by Andreessen and others.


The labyrinthine interplay of AI governance is growing, blending ethical aspirations with legislative teeth. It’s an odyssey that warrants close monitoring and active participation by all stakeholders.


Stay tuned for part two of this series, where we examine the state of AI regulation.

