
How Can the Canadian Legislature Move Forward with its Growing AI Market?

Christian Cho | Nov 14th, 2021



A few years ago, AI seemed "futuristic" and was a common theme in movies portraying the future. In 2021, AI is prevalent in our daily lives and is put to use in many different capacities across modern technology. Ongoing funding and business initiatives aim to make the Canadian AI market one of the best in the world. Worth over 8 billion dollars, the Canadian AI market is growing rapidly as more money is invested in AI R&D, attracting global talent and fierce competition. However, there are gaps in tort law that may leave some of the parties involved without recourse when AI is used in Canada.


These gaps in our law are natural and reasonable, since the law has existed far longer than the idea of AI. Before turning to the law and the specific areas that require revision to adapt to the new digital world, I would like to introduce GPT-3, a project developed by OpenAI, an organization co-founded and partially funded by Elon Musk. In brief, GPT-3 is an AI that produces human-like text through deep learning. From writing emails to building a search engine, it has been developed and tested to provide services in diverse areas. While many praised the work and development behind OpenAI's GPT-3, some cases raised serious concerns. The head of Facebook's AI lab called GPT-3 "unsafe" because, when asked to discuss women, Black people, Jews, and the Holocaust, it produced sexist, racist, and harmful language (1). In a test by Nabla, a French start-up, a simulated chat between the AI and a patient produced a troubling outcome: the AI told a patient with mental illness to continue with their suicidal ideation (2).
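To give a sense of how accessible such a system is to developers, here is a minimal sketch of generating text with GPT-3 using the 2021-era openai Python client; the prompt, engine choice, and parameters below are illustrative assumptions, not details taken from this article or from the cases described above.

import openai

# Authenticate with an API key issued by OpenAI (placeholder value).
openai.api_key = "sk-..."

# Ask GPT-3 to complete a prompt; "davinci" was the largest engine offered in 2021.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a short, polite email declining a meeting invitation.",
    max_tokens=100,
    temperature=0.7,
)

# The generated text is returned in the first choice of the response.
print(response["choices"][0]["text"])

A few lines like these are enough to put GPT-3's output in front of end users, which is precisely why the question of who bears responsibility for harmful output matters.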


If an AI causes harm to its user, what legal liability will the AI's creator face? According to Global Legal Insights, under tort law, a party may be liable to another party for injury caused by the first party's negligence with respect to the goods or services they provided (3). AI consists of complex algorithms operating on the user's input, so it is difficult to prove that any one party is directly negligent for the damage. Tort law has been applied to AI in only a small number of cases, meaning that many customers and users of AI have been unable to exercise their rights because their harm fell beyond the reach of the law. Hence, revising any law that needs to change in order to adapt to this rapidly changing world is necessary to protect citizens from a new technology that holds the power to reshape our livelihoods.










