Publications

Packed with valuable information, our publications keep you up to date with the latest legal developments affecting you, whatever your sector of activity. Our professionals are committed to keeping you informed of breaking legal news through their analysis of recent judgments, amendments, laws, and regulations.

  • Crypto asset works of art and non-fungible token (NFT) investments: Be careful!

    On March 11, 2021, Christie’s auction house made a landmark sale by auctioning off an entirely digital artwork by the artist Beeple, a $69 million transaction in Ether, a cryptocurrency.1 In doing so, the famous auction house put non-fungible tokens (“NFTs”), the product of a decentralized blockchain, in the spotlight. While many extol the benefits of such crypto asset technology, there are also significant risks associated with it,2 requiring greater vigilance when dealing with any investment or transaction involving NFTs.

What is an NFT?

The distinction between fungible and non-fungible assets is not new. Prior to the invention of blockchain, the distinction was used to differentiate assets based on their availability: fungible assets are highly available, while non-fungible assets are scarce. A fungible asset can easily be replaced by an equivalent asset with the same market value. The best example is money, whether coins, notes, deposit money or digital money, such as Bitcoin. A non-fungible asset, on the contrary, is unique and irreplaceable. Works of art are non-fungible assets in that they are either unique or exist in very few copies. Their value stems from, among other things, their authenticity and provenance.

NFTs are crypto assets associated with blockchain technology that replicate this phenomenon of scarcity. Each NFT is associated with a unique identifier to ensure traceability. In addition to the online art market, NFTs have been associated with the collection of virtual items, such as sports cards and other memorabilia and collectibles, including the first tweet ever written.3 NFTs can also be associated with tangible goods, in which case they can be used to track exchanges and transactions related to such goods. In 2019, Ernst & Young developed a system of unique digital identifiers for a client to track and manage its collection of fine wines.4 Many projects rely on cryptocurrencies, such as Ether, to create NFTs.
This type of cryptocurrency is programmable and allows metadata to be embedded through code that becomes the key to tracking assets, such as works of art or other valuables.

What are the risks associated with NFTs?

Although many praise the benefits of NFTs, in particular the increased traceability of the origin of goods exchanged through digital transactions, it has become clear that the speculative bubble of the past few weeks has, contrary to expectations, resulted in new opportunities for fraud and abuse of the rights associated with works exchanged online.

An unregulated market?

While there is currently no legislative framework that specifically regulates crypto asset transactions, NFT buyers and sellers are still subject to the laws and regulations currently governing the distribution of financial products and services,5 the securities laws,6 the Money-Services Business Act7 and the tax laws.8

Is an NFT a security?

In January 2020, the Canadian Securities Administrators (CSA) identified crypto asset “commodities” as assets that may be subject to securities laws and regulations. Thus, platforms that manage and host NFTs on behalf of their users engage in activities governed by the laws that apply to securities trading, as long as they retain possession or control of the NFTs. Conversely, a platform will not be subject to regulatory oversight if: “the underlying crypto asset itself is not a security or derivative; and the contract or instrument for the purchase, sale or delivery of a crypto asset results in an obligation to make immediate delivery of the crypto asset, and is settled by the immediate delivery of the crypto asset to the Platform’s user according to the Platform’s typical commercial practice.”9

Fraud10

NFTs do not protect collectors and investors from fraud and theft.
Among the documented risks are fake websites robbing investors of their cryptocurrencies, thefts and disappearances of NFTs hosted on platforms, and copyright and trademark infringement.

Theft and disappearance of NFT assets

As some Nifty Gateway users unfortunately learned the hard way in late March, crypto asset platforms are not inherently immune to hacking and to the theft of personal data associated with accounts, including credit card information. With the hacking of many Nifty Gateway accounts, some users were robbed of their entire NFT collection.11 NFTs are designed to prevent a transaction concluded between two parties from being reversed. Once the transfer of an NFT to another account has been initiated, neither the user nor a third party such as a bank can reverse the transaction. Cybercrime targeting crypto assets is not in its infancy: similar schemes have been seen in thefts of the cryptocurrency Ether.

Copyright infringement and theft of artwork images

The use of NFTs gives rise to three types of problems that could lead to property right and copyright infringement:

• It is possible to create more than one NFT for the same work of art or collectible, thus generating separate chains of ownership.
• NFTs can be created for works that already exist and are not owned by the person marketing them.
• There are no mechanisms to verify the copyrights and property rights associated with transacted NFTs, which creates false chains of ownership.

Moreover, the authenticity of the original depends too heavily on URLs that are vulnerable and could eventually disappear.12 For the time being, these problems have yet to be addressed by the various platforms and the other parties involved in NFT transactions, including art galleries. The risks are thus borne solely by the buyer. This situation calls for increased accountability for platforms and others involved in transactions.
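To make the duplicate-minting problem concrete, here is a minimal Python sketch of the kind of content-hash check a registry would need in order to refuse a second token for the same work. It is purely illustrative: the registry class, its methods and the sample data are invented for this example, and real NFT platforms generally perform no such verification, which is precisely the problem described above.

```python
import hashlib

class NaiveNFTRegistry:
    """Illustrative registry that refuses to mint two tokens for one work.

    Hypothetical: real NFT platforms generally do NOT perform this check,
    which is how duplicate tokens and separate chains of ownership arise.
    """

    def __init__(self):
        self._token_by_content = {}  # SHA-256 of the work -> token id
        self._owner_by_token = {}    # token id -> current owner

    def mint(self, artwork_bytes, owner):
        digest = hashlib.sha256(artwork_bytes).hexdigest()
        if digest in self._token_by_content:
            # Minting again would create a second, competing chain of ownership.
            raise ValueError(
                f"content already minted as token {self._token_by_content[digest]}"
            )
        token_id = digest[:16]  # shortened hash as an illustrative identifier
        self._token_by_content[digest] = token_id
        self._owner_by_token[token_id] = owner
        return token_id

registry = NaiveNFTRegistry()
artwork = b"raw bytes of a digital artwork"
token = registry.mint(artwork, "alice")

try:
    registry.mint(artwork, "bob")  # second mint of the same content is refused
except ValueError as err:
    print(err)
```

Note that such a check only catches byte-identical copies; a slightly altered image would still mint as a “new” work, which is why the problems listed above call for verification of rights, not merely of content.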
The authenticity of the NFTs traded must be verified, as should the identity of the parties involved in a transaction.

Money laundering and proceeds of crime

In September 2020, the Financial Action Task Force (FATF)13 published a report on the main risks associated with virtual assets and with platforms offering services relating to such assets. In particular, the FATF pointed out that money laundering and other types of illicit activity financing are facilitated by virtual assets, which are more conducive to rapid cross-border transactions in decentralized markets that are not regulated by national authorities;14 that is, the online marketplaces where cryptocurrencies and decentralized assets are traded on blockchains. Among other things, the FATF pointed to the anonymity of the parties to transactions as a factor that increases risk.

Considering all the risks associated with NFTs, we recommend taking the utmost precaution before investing in this category of crypto assets. In fact, on April 23, 2021, the Autorité des marchés financiers reiterated its warning about the “inordinately high risks” associated with investments involving cryptocurrencies and crypto assets.15

The best practices to implement prior to any transaction are:

• obtaining evidence of the identity of the party you are transacting with;
• safeguarding your crypto assets yourself, if possible; and
• checking with regulatory bodies to ensure that the platform on which the exchange will take place complies with applicable laws and regulations regarding the issuance of securities and derivatives.

1. https://onlineonly.christies.com/s/beeple-first-5000-days/lots/2020
2. On April 23, 2021, the Autorité des marchés financiers reiterated its warnings about issuing tokens and investing in crypto assets: https://lautorite.qc.ca/en/general-public/media-centre/news/fiche-dactualites/amf-warns-about-the-risks-associated-with-crypto-assets
3. https://www.reuters.com/article/us-twitter-dorsey-nft-idUSKBN2BE2KJ
4. https://www.ey.com/en_gl/news/2019/08/ey-helps-wiv-technology-accelerate-fine-wine-investing-with-blockchain
5. Act respecting the regulation of the financial sector, CQLR, c. E-6.1; Act respecting the distribution of financial products and services, CQLR, c. D-9.2.
6. Securities Act, CQLR, c. V-1.1; see also the regulatory sandbox produced by the CSA: https://www.securities-administrators.ca/industry_resources.aspx?ID=1715&LangType=1033
7. CQLR, c. E-12.000001.
8. https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html; https://www.revenuquebec.ca/en/fair-for-all/helping-you-meet-your-obligations/virtual-currency/reporting-virtual-currency-income/
9. https://lautorite.qc.ca/fileadmin/lautorite/reglementation/valeurs-mobilieres/0-avis-acvm-staff/2020/2020janv16-21-327-avis-acvm-en.pdf
10. https://www.telegraph.co.uk/technology/2021/03/15/crypto-art-market-infiltrated-fakes-thieves-scammers/
11. https://www.coindesk.com/nifty-gateway-nft-hack-lessons; https://news.artnet.com/opinion/nifty-gateway-nft-hack-gray-market-1953549
12. https://blog.malwarebytes.com/explained/2021/03/nfts-explained-daylight-robbery-on-the-blockchain/
13. FATF is an independent international body that assesses the risks associated with money laundering and the financing of both terrorist activities and the proliferation of weapons of mass destruction.
14. https://www.fatf-gafi.org/media/fatf/documents/recommendations/Virtual-Assets-Red-Flag-Indicators.pdf, p. 1.
15. https://lautorite.qc.ca/en/general-public/media-centre/news/fiche-dactualites/amf-warns-about-the-risks-associated-with-crypto-assets

  • Artificial intelligence soon to be regulated in Canada?

    For the time being, there are no specific laws governing the use of artificial intelligence in Canada. Certainly, the laws on the use of personal information and those that prohibit discrimination still apply, whether the technologies involved are so-called artificial intelligence technologies or conventional ones. However, the application of such laws to artificial intelligence raises a number of questions, especially when dealing with “artificial neural networks,” because the opacity of the algorithms behind them makes it difficult for those affected to understand the decision-making mechanisms at work. Artificial neural networks differ in that they provide only limited explanations as to their internal operation.

On November 12, 2020, the Office of the Privacy Commissioner of Canada (OPC) published its recommendations for a regulatory framework for artificial intelligence.1 Pointing out that the use of artificial intelligence requiring personal information can have serious privacy implications, the OPC made several recommendations, which involve the creation of the following in particular:

• A requirement for those who develop artificial intelligence systems to ensure that privacy is protected in the design of such systems;
• A right for individuals to obtain an explanation, in understandable terms, to help them understand decisions made about them by an artificial intelligence system, together with the assurance that such explanations are based on accurate information and are not discriminatory or biased;
• A right to contest decisions resulting from automated decision making;
• A right for the regulator to require evidence of the above.

It should be noted that these recommendations include the possibility of imposing financial penalties on companies that fail to abide by this regulatory framework.
Moreover, contrary to the approach adopted in the General Data Protection Regulation and the Government of Quebec’s Bill 64, the rights to explanation and contestation would not be limited solely to automated decisions, but would also cover cases where an artificial intelligence system assists a human decision-maker.

It is likely that these proposals will eventually provide a framework for the operation of artificial intelligence systems already under development. It would thus be prudent for designers to take these recommendations into account and incorporate them into their artificial intelligence system development parameters now. Should these recommendations be adopted, it will also become necessary to consider how to explain the mechanisms behind the systems making or suggesting decisions based on artificial intelligence. As mentioned in the recommendations, “while trade secrets may require organizations to be careful with the explanations they provide, some form of meaningful explanation should always be possible without compromising intellectual property.”2 For this reason, it may be crucial to involve lawyers specializing in these matters from the start when designing solutions that use artificial intelligence and personal information.

1. https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/reg-fw_202011/
2. Ibid.

  • Use of patents in artificial intelligence: What does the new CIPO report say?

    Artificial intelligence is one of the areas of technology where there is currently the most research and development in Canada. To preserve Canada's advantageous position in this area, it is important to consider all forms of intellectual property protection that may apply. Although copyright has historically been the preferred form of intellectual property in computer science, patents are nevertheless very useful in the field of artificial intelligence. The monopoly they grant can be an important incentive to foster innovation.

This is why the Canadian Intellectual Property Office (CIPO) felt the need to report on the state of artificial intelligence and patents in Canada. In its report titled Processing Artificial Intelligence: Highlighting the Canadian Patent Landscape, published in October 2020, CIPO presents statistics that clearly demonstrate the upward trend in patent activity by Canadian researchers in the area of artificial intelligence. However, this increase remains much less marked than those observed in the United States and China, the champions in the field. Nevertheless, Canada ranked sixth in the world in the number of patented inventions attributed to Canadian researchers and institutions.

[Figure: International patent activity in AI between 1998 and 2017. Reproduced with the permission of the Minister of Industry, 2020.]

[Figure: International patent activity by assignee's country of origin in AI between 1998 and 2017. Reproduced with the permission of the Minister of Industry, 2020.]

Canadian researchers are particularly specialized in natural language processing, which is not surprising for a bilingual country. But their strengths also lie in knowledge representation and reasoning, and in computer vision and robotics. Generally speaking, the most active areas of application for artificial intelligence in Canada are life sciences and medicine and computer networks, followed by energy management in particular.
This seems to be a natural fit for Canada, a country with well-developed healthcare systems and with telecommunications and energy infrastructure that reflects its vast territory.

The only shortcoming is the lack of representation of women in artificial intelligence patent applications in Canada. This is an important long-term issue, since maintaining the country's competitiveness will necessarily require ensuring that all the best talent is involved in the development of artificial intelligence technology in Canada.

Regardless of which of these fields you work in, it may be important to consult a patent agent early in the invention process, particularly to ensure optimal protection of your inventions and to maximize the benefits for Canadian institutions and businesses. Please do not hesitate to contact a member of our team!

  • The Unforeseen Benefits of Driverless Transport during a Pandemic

    The COVID-19 pandemic has not only caused major social upheaval, but has also disrupted business development and the economy. Nevertheless, since last March, we have seen many developments and new projects involving self-driving vehicles (SDVs). Here is an overview.

Distancing made easy thanks to contactless delivery

In mid-April 2020, General Motors’ Cruise SDVs were dispatched to assist two food banks in the delivery of nearly 4,000 meals in eight days in the San Francisco Bay Area. Deliveries were made with two volunteer drivers overseeing the operation of the Level 3 SDVs. Rob Grant, Vice President of Global Government Affairs at Cruise, commented on the usefulness of SDVs: “What I do see is this pandemic really showing where self-driving vehicles can be of use in the future. That includes in contactless delivery like we’re doing here.”1

Also in California in April, SDVs operated by the start-up Nuro Inc. were made available to transport medical equipment in San Mateo County and Sacramento. Toyota Pony SDVs were, for their part, used to deliver meals to local shelters in the city of Fremont, California.

Innovation: The first Level 4 driverless vehicle service

In July 2020, Navya Group successfully implemented a Level 4 self-driving vehicle service on a closed site. Launched in partnership with Groupe Keolis, the service has been transporting visitors and athletes on the site of the National Shooting Sports Centre in Châteauroux, France, from the parking lot to the reception area. This is a great step forward: it is the first trial of a Level 4 vehicle, meaning that it is fully automated and does not require a human driver in the vehicle itself to take control should a critical situation occur.

Driverless buses and dedicated lanes in the coming years

In August 2020, the state of Michigan announced that it would take active steps to create dedicated road lanes exclusively for SDVs on a 65 km stretch of highway between Detroit and Ann Arbor.
This initiative will begin with a study to be conducted over the next three years. One of the goals of this ambitious project is to have driverless buses operating in the corridor connecting the University of Michigan and the Detroit Metropolitan Airport in downtown Detroit.

In September 2020, the first SDV circuit in Japan was inaugurated at Tokyo’s Haneda Airport. The regular route travels 700 metres through the airport.

A tragedy to remind us that exercising caution is key

On March 18, 2018, in Tempe, Arizona, a pedestrian was killed in a collision with a Volvo SUV operated by an Uber Technologies automated driving system that was being tested. The vehicle involved in the accident, which was still being fine-tuned, corresponded to a Level 3 SDV under SAE International Standard J3016, requiring a human driver to remain alert at all times in order to take control of the vehicle in a critical situation. The investigation by the National Transportation Safety Board determined that the vehicle’s automated driving system had detected the pedestrian, but was unable to classify her as such and thus predict her path. In addition, video footage of the driver inside the SDV showed that she did not have her eyes on the road at the time of the accident, but was instead looking at her cell phone on the vehicle’s console. In September 2020, the authorities indicted the driver of the vehicle and charged her with negligent homicide. The driver pleaded not guilty, and the pre-trial conference will be held in late October 2020. We will keep you informed of developments in this case.

In all sectors of the economy, including the transportation industry and, more specifically, the self-driving vehicle industry, projects have been put on hold because of the ongoing COVID-19 pandemic. Nevertheless, many projects that have been introduced, such as contactless delivery projects, are now more important than ever.
Apart from the Navya Group project, which involves Level 4 vehicles, all the initiatives mentioned concern Level 3 vehicles. These vehicles, which are allowed on Quebec roads, must always have a human driver present. The recent charges against the inattentive driver in Arizona serve as a reminder to all drivers of Level 3 SDVs that, regardless of the context of an accident, they may be held liable.

The implementation of SDVs around the world is slow, but steadily gaining ground. A number of projects will soon be rolled out, including in Quebec. As such initiatives grow in number, SDVs will become more socially acceptable, and the day when these vehicles are a normal sight on our roads may be right around the corner.

1. Financial Post, April 29, 2020, “Self-driving vehicles get in on the delivery scene amid COVID-19.”

  • Artificial Intelligence and Telework: Security Measures to be Taken

    Cybersecurity will generally be a significant issue for businesses in the years to come. With teleworking, cloud computing and the advent of artificial intelligence, large amounts of data are likely to fall prey to hackers attracted by the personal information or trade secrets they contain.

From a legal standpoint, businesses have a duty to take reasonable steps to protect the personal information they hold.1 Although the legal framework doesn’t always specify what such reasonable means are in terms of technology, measures appropriate for the personal information in question must nevertheless be applied. These measures must also be assessed in light of evolving threats to IT systems. Some jurisdictions, such as Europe, go further and require that IT solutions incorporate security measures by design.2 In the United States, with respect to medical information, there are numerous guidelines on the technical means to be adopted to ensure that such information is kept secure.3

In addition to the personal information they hold, companies may also want to protect their trade secrets. These are often invaluable, and their disclosure to competitors could cause irreparable harm.

No technology is immune. In a recent publication,4 the renowned Kaspersky firm warns of the growing risks posed by certain organized hacker groups seeking to exploit the weaknesses of Linux operating systems, despite their reputation for being highly secure. Kaspersky lists a number of known vulnerabilities that can be used for ransom attacks or to gain access to privileged information. The publication echoes the warnings issued by the FBI regarding the discovery of new malware targeting Linux.5

Measures to be taken to manage the risk

It is thus important to take appropriate measures to reduce these risks.
We recommend in particular that business directors and officers:

• Adopt corporate policies that prevent the installation of unsafe software by users;
• Adopt policies for the regular review and updating of IT security measures;
• Have penetration tests and audits conducted to check system security;
• Ensure that at least one person in management is responsible for IT security.

Should an intrusion occur, or as a precautionary measure for businesses that collect and store sensitive personal information, consulting a lawyer specializing in personal information or trade secrets is recommended in order to fully understand the legal issues involved.

1. See in particular: Act respecting the protection of personal information in the private sector (Quebec), s. 10; Personal Information Protection and Electronic Documents Act (Canada), s. 3.
2. General Data Protection Regulation, art. 25.
3. Security Rule, under the Health Insurance Portability and Accountability Act, 45 CFR Parts 160 and 164.
4. https://securelist.com/an-overview-of-targeted-attacks-and-apts-on-linux/98440/
5. https://www.fbi.gov/news/pressrel/press-releases/nsa-and-fbi-expose-russian-previously-undisclosed-malware-drovorub-in-cybersecurity-advisory

  • Improving Cybersecurity with Machine Learning and Artificial Intelligence

    New challenges

The arrival of COVID-19 disrupted the operations of many companies. Some had to move to working from home. Others were forced to quickly set up online services. This accelerated transition has made cybersecurity vitally important, particularly considering the personal information and trade secrets that might be accidentally disclosed. Cybersecurity risks can stem not only from hackers, but also from software configuration errors and negligent users.

One of the best strategies for managing cybersecurity risks is to try to find weak spots in a system before an attack occurs, for example by conducting a penetration test. This type of testing has evolved considerably over the past few years, going from targeted trial and error to larger, more systematic approaches.

What machine learning can bring to companies

Machine learning, and artificial intelligence in general, can simulate human behaviour and can therefore stand in for a hypothetical negligent user or hacker for testing purposes. As a result, penetration tests involving artificial intelligence can be a good deal more effective. One example of relatively simple machine learning is Arachni, an open-source tool that assesses the security of web applications. It is one of the tools in the Kali Linux distribution, which is well known for penetration testing. Arachni uses a variety of advanced techniques, but it can also be trained to be more effective at discovering attack vectors: the vulnerabilities where applications are most exposed.1 Many other cybersecurity software programs now have similar learning capabilities. Artificial intelligence can go even further.
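As a toy illustration of the kind of learning such tools rely on, the sketch below flags abnormal user sessions with an isolation forest. The model choice (scikit-learn’s IsolationForest), the feature set and the numbers are all invented for this example and are not how Arachni or any named product works; it simply shows how a model trained on normal behaviour can single out sessions that deviate from it.

```python
# Toy behavioural anomaly detection: flag user sessions that deviate from
# a baseline of normal activity. Features and values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated normal sessions: [requests/minute, kilobytes transferred, distinct URLs]
normal_sessions = rng.normal(loc=[30.0, 500.0, 10.0],
                             scale=[5.0, 100.0, 2.0],
                             size=(200, 3))

# Two suspicious sessions with extreme request rates and data volumes
suspicious_sessions = np.array([[300.0, 9000.0, 120.0],
                                [250.0, 8000.0, 90.0]])

# Train only on normal behaviour; anything too different scores as an outlier
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious_sessions))  # both flagged as anomalies (-1)
```

In a real deployment, the features would come from server or endpoint logs, and flagged sessions would feed an analyst’s review queue rather than trigger an automatic block.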
Possible uses for artificial intelligence in the cybersecurity field include:2

• faster reaction times during malware attacks;
• more effective detection of phishing attempts;
• a contextualized understanding of abnormal user behaviour.

IBM has recently created a document explaining how its QRadar suite, which incorporates artificial intelligence, can reduce managers’ cybersecurity burden.3

What it means

Human beings remain central to cybersecurity issues. Managers must not only understand those issues, including the ones created by artificial intelligence, but must also give users clear directives and ensure compliance. When considering which cybersecurity measures to impose on users, it is important for IT managers to be aware of the legal concerns involved:

• Avoid overly intrusive or constant employee surveillance. It may be wise to consult a lawyer with experience in labour law to ensure that the cybersecurity measures are compatible with applicable laws.
• It is important to understand the legal ramifications of a data or security breach. Some personal information (such as medical data) is more sensitive, and the consequences of a security breach involving this type of information are more severe. It may be useful for those responsible for IT security to talk to a lawyer with experience in personal information laws.
• Finally, a company’s trade secrets sometimes require greater protective measures than other company information. It may be wise to include IT security measures in the company’s intellectual property strategy.

1. https://resources.infosecinstitute.com/web-application-testing-with-arachni/#gref
2. https://www.zdnet.com/article/ai-is-changing-everything-about-cybersecurity-for-better-and-for-worse-heres-what-you-need-to-know/; https://towardsdatascience.com/cyber-security-ai-defined-explained-and-explored-79fd25c10bfa
3. Beyond the Hype: AI in your SOC, published by IBM; see also: https://www.ibm.com/ca-en/marketplace/cognitive-security-analytics/resources

  • Development of a legal definition of artificial intelligence: different countries, different approaches

    As our society begins to embrace artificial intelligence, many governments are having to deal with public concern as well as the ongoing push to harness these technologies for the public good. The reflection is well underway in many countries, but with varying results.

The Office of the Privacy Commissioner of Canada is currently consulting with experts to make recommendations to Parliament, the purpose being to determine whether specific privacy rules should apply to artificial intelligence. In particular, should Canada adopt a set of rules similar to the European rules (GDPR)? Another question raised in the process is the possibility of adopting measures similar to those proposed in the Algorithmic Accountability Act of 2019, a bill introduced in the U.S. Congress that would give the U.S. Federal Trade Commission the power to force companies to assess the risks related to discrimination and data security for AI systems. The Commission d’accès à l’information du Québec is conducting similar consultations.

The Americans, in their approach, also appear to be working on securing their country’s position in the AI market. On August 9, 2019, the National Institute of Standards and Technology (NIST) released a draft government action plan in response to a Presidential Executive Order. Entitled U.S. LEADERSHIP IN AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools,1 the plan calls for the development of robust new technologies to make AI solutions more reliable, along with standardized norms for such technologies. Meanwhile, on November 21, 2019, the Congressional Research Service published an updated version of its report entitled Artificial Intelligence and National Security.2 It presents a reflection on the military applications of artificial intelligence and, in particular, on the fact that various combat devices have the capacity to carry out lethal attacks autonomously.
It also looks at ways to counter deep fakes, specifically by developing technology to uncover what could become a means of disinformation. The idea is thus to bank on technological progress to thwart misused technology.

In Europe, further to consultations completed in May 2019, the Expert Group on Liability and New Technologies published a report for the European Commission entitled Liability for Artificial Intelligence,3 which looks into the liability laws that apply to such technology. The group points out that, except for matters involving personal information (GDPR) and motor vehicles, the liability laws of member states are not standardized throughout Europe. One of its recommendations is to standardize such liability laws; in its view, comparable risks should be covered by similar liability laws.4 Earlier, in January 2019, the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data published its Guidelines on Artificial Intelligence and Data Protection,5 which include recommendations to comply with human rights conventions, not only for lawmakers but also for the developers, manufacturers and service providers using such technology.

Even with these different approaches, one fundamental question remains: if special rules are to be adopted, to which technologies should they apply? This is one of the main questions that the Office of the Privacy Commissioner of Canada is posing. In other words, what is artificial intelligence? The term is not clearly defined from a technological standpoint. It covers a multitude of technologies with diverse characteristics and operating modes. This is the first issue that lawmakers will have to address if they wish to develop a legal framework specific to AI. The document of the European expert group mentioned above gives us some points to consider that we believe to be relevant.
In the group’s view, the following factors should be taken into consideration when qualifying a technology:

• its complexity;
• its opacity;
• its openness to interaction with other technologies;
• its degree of autonomy;
• the predictability of its results;
• the degree to which it is data-driven;
• its vulnerability to cyberattacks and the associated risks.

These factors help to identify, on a case-by-case basis, the risks inherent to different technologies. In general, we think it preferable not to adopt a rigid set of standards that apply to all technologies. We suggest instead identifying legislative goals in terms of characteristics that may be found in many different technologies. For example, some deep learning technologies use personal information, while others require little or no such information. Some can make decisions on their own, while others only help a human to do so. Finally, some technologies are relatively transparent and others more opaque, due in part to technological or commercial constraints.

For developers, it becomes important to properly characterize a technology in order to measure the risks that its commercialization involves. More specifically, it may be important to consult legal experts from different backgrounds to ensure that the technology in question is not incompatible with applicable laws, or with laws soon to be adopted, in the various jurisdictions where it is to be rolled out.

1. https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf
2. https://fas.org/sgp/crs/natsec/R45178.pdf
3. https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608
4. Ibid., p. 36.
5. https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8

    Read more
  • What lessons can we take from the fatal accident in Arizona in 2018 involving an autonomous vehicle?

    On March 18, 2018, in Tempe, Arizona, a vehicle operated by self-driving software under development collided with a pedestrian, causing her death. Following this accident, the U.S. National Transportation Safety Board ("NTSB") conducted an investigation and, on November 19, 2019, issued its preliminary results and recommendations.1 The circumstances of the accident involving an autonomous car from Uber The autonomous vehicle ("AV"), a 2017 Volvo XC90, was equipped with an automated driving system being developed by Uber Technologies Inc. ("Uber"). At the time of the collision, the vehicle was travelling at a speed of approximately 72 km/h and was completing the second portion of a predetermined route as part of a driving test. The pedestrian was struck while crossing the street outside the crosswalk. The NTSB's investigation found that the vehicle's automated driving system had detected the pedestrian, but was unable to classify her as a pedestrian and predict her path. Further, the automated driving system prevented the activation of the vehicle's emergency braking system, relying instead on the intervention of the human driver on board to regain control of the vehicle in this critical situation. However, videos from inside the vehicle showed that the driver was not paying attention to the road, but was instead looking at her cell phone lying on the vehicle console. Since the collision between the pedestrian and the vehicle was imminent, the inattentive driver was unable to take control of the vehicle in time to prevent the accident and mitigate the damage. What are the causes of the accident? 
The NTSB issued several findings, including the following: Neither the driver's experience or knowledge, nor her fatigue or mental faculties, nor the mechanical condition of the vehicle was a factor in the accident; An examination of the pedestrian showed the presence of drugs in her body which may have impaired her perception and judgment; Uber's automated driving system did not adequately anticipate its safety limitations, including its inability to identify the pedestrian and predict her path; The driver of the vehicle was distracted in the moments preceding the accident. Had she been attentive, she would have had enough time to see the pedestrian and take control of the vehicle to avoid the accident or mitigate its impact; Uber did not adequately recognize the risks of distraction of the drivers of its vehicles; Uber had removed the second driver from the vehicle during the tests, which had the effect of giving the sole remaining driver full responsibility for intervening in a critical situation, thereby reducing vehicle safety. The probable cause of the accident was found to be the driver's distraction and failure to take control of the AV in a critical situation. Additional factors were identified, including insufficient vehicle safety measures and driver monitoring, associated with deficiencies in the safety culture at Uber. The NTSB issued recommendations, including the following: Arizona should implement obligations for AV project developers regarding the risks associated with the inattentiveness of vehicle drivers, aimed at preventing accidents and mitigating risks; The NTSB should require entities conducting projects involving AVs to submit a self-assessment report on the safety measures for their vehicles. Additionally, the NTSB should set up a process for the assessment of these safety measures; Uber should implement a policy on the safety of its automated driving software. 
Can an identical tragedy related to autonomous vehicles occur in Quebec and Canada? Following the update to the Highway Safety Code in April 2018, level 3 AVs are now permitted to be driven in the province of Quebec when their sale is allowed in Canada. Driving of level 4 and 5 automated vehicles is permitted where it is expressly regulated in the context of a pilot project.2 According to SAE International Standard J3016, level 3 AVs are vehicles with so-called conditional automation, in which active driving is automated but the human driver must remain attentive in order to take control of the vehicle in a critical situation. Thus, the vehicle involved in the Arizona accident, although still in the development phase, corresponded to a level 3 AV. Level 3 AVs are now circulating fully legally on Quebec roads. In Canada, the Motor Vehicle Safety Act3 and the relevant regulations thereunder govern “the manufacture and importation of motor vehicles and motor vehicle equipment to reduce the risk of death, injury and damage to property and the environment”. However, there is currently no provision specifically regulating automated driving software or the risks associated with the inattention of level 3 AV drivers. With the arrival of AVs in Canada, taking into consideration the recommendations of the NTSB and to ensure the safety of all, we believe the current framework would need to be improved to specifically address AV safety measures.   National Transportation Safety Board, Public Meeting of November 19, 2019, “Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian”, Tempe, Arizona, March 18, 2018, HWY18MH010. Highway Safety Code, CQLR c. C-24.2, s. 492.8 and 633.1; the driving of autonomous vehicles in Ontario is regulated by Pilot Project - Automated Vehicles, O Reg 306/15. Motor Vehicle Safety Act, S.C. 1993, c. 16; see, in particular, the Motor Vehicle Safety Regulations, C.R.C., c. 1038.

    Read more
  • Neural Networks and Liability: When the Information Lies in Hidden Layers

    Many of the most advanced machine-learning techniques rely on artificial neural networks, which allow systems to "learn" tasks by considering examples, without being programmed specifically to perform those tasks. Neural networks are nothing new; however, the emergence of deep learning1 and of computers able to rapidly manipulate large amounts of data has led to the development of a myriad of solutions incorporating machine learning in various aspects of life. From image recognition to financial data processing, machine learning is becoming ubiquitous. From a mathematical perspective, modern neural networks almost always incorporate what are known as “hidden layers”, which process information between the input and output of a neural network system. Hidden layers’ nodes are not specifically assigned any task or weight by a human programmer, and typically there is no direct way of knowing how information is processed within them. In plain language, most current machine-learning techniques rely on methods which function in such a way that part of what is happening is not known to the human operators. For this reason, the systems that incorporate such methods will give rise to new legal challenges for lawyers. Scholars have been studying this issue for more than a decade now2, but have failed to provide definitive answers. Such questions are at the forefront of current legal debates. In a much-publicized case before the U.S. Supreme Court on gerrymandering3, machine learning was mentioned as a source of concern in the dissenting opinion. This is not surprising, given that the lower courts were presented with evidence on Markov chain Monte Carlo algorithms4, which share this characteristic of not providing the human operator with a detailed explanation of how each piece of data entered affects the results. 
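To make the notion of hidden layers concrete, the following is a minimal sketch in Python using NumPy. It is purely illustrative and not drawn from any system discussed above: the task (XOR), the network size (four hidden nodes) and the learning rate are arbitrary choices for demonstration. The point is that the hidden-layer weights are produced by gradient descent rather than assigned by a programmer, so inspecting them after training yields only a grid of numbers with no stated rationale.

```python
import numpy as np

# Illustrative sketch only: a tiny feedforward network with one hidden
# layer, trained on the XOR problem. The task, the network size and the
# learning rate are arbitrary choices made for demonstration purposes.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights are initialized randomly: input -> hidden (2x4), hidden -> output (4x1).
# No human assigns a task or meaning to any hidden node.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def forward(X):
    hidden = sigmoid(X @ W1)                 # hidden-layer activations
    return hidden, sigmoid(hidden @ W2)      # network output

losses = []
lr = 1.0
for _ in range(8000):
    hidden, out = forward(X)
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation: gradients flow from the output back through the hidden layer.
    grad_out = err * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    W1 -= lr * X.T @ grad_hidden

_, out = forward(X)
print(out.ravel())  # the network's outputs for the four inputs
print(W1)           # the learned hidden weights: opaque numbers, no human-readable logic
```

Even in this toy example, explaining *why* a given hidden weight has the value it does requires reconstructing the entire training history; in a production system with millions of weights, that opacity is precisely the evidentiary problem described above.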
In some jurisdictions, for example the United States, a technology user may be able to ward off requests for disclosure of the technology’s algorithms and the details of the machine-learning process by arguing that they are protected as trade secrets of the vendor of that technology5. Even then, it might still be necessary to disclose at least some information, such as the results of the machine-learning process in various situations, to demonstrate its reliability and adequacy. Even such a defence may not be available in other jurisdictions. For example, in France, the Constitutional Council recently held that a public administration may rely on algorithmic processes in making decisions only if it is able to disclose, in detail and in an intelligible format, the way in which the algorithmic process makes its decisions6. From a computer-science standpoint, it is difficult to reconcile such requirements with the notion of hidden layers. More importantly, there might be cases in which a person may wish to disclose how they made a decision based on a machine-learning technology in order to show that they acted properly. For instance, some professionals, such as those in the field of health care, could be required to explain how they made a decision assisted by machine learning in order to avoid professional liability. A recent decision of the Court of Queen’s Bench of Alberta7 concerning the professional liability of physicians shows how complex such evidence can be. In that case, one of the factors involved in assessing the physicians’ liability was the fetal weight, and the different formulas that could have been used in determining it. The court made the following statement: “[…] the requisite expertise would concern the development of the algorithms used in the machine-based calculations of the composite birth weight as reflecting empirical research respecting actual birth weights and the variables or factors used to calculate composite birth weights. 
No individual or combination of individuals with such expertise testified. I draw no conclusions respecting the February ultrasound report calculations turning on different formulas and different weight estimates based on different formulas.” For developers and users of machine-learning technologies, it is therefore important at least to document the information used to train the algorithm, how the system was set up, and the reasoning followed in choosing the various technological methods used for the machine learning. Computer scientists who have developed applications for use in specific fields may wish to work closely with experts in those fields to ensure that the data used to train the algorithm is adequate and the resulting algorithm is reliable. In some cases, it may even be necessary to develop additional technologies to track the information traveling through the neural network and probe those hidden layers8. Things to remember The risks associated with the use of a system incorporating machine learning must be assessed from the design stage. It is recommended to consult a lawyer at that time to properly guide the project. Where possible, technological choices should be directed towards robust approaches with results that are as stable as possible. It is important to document these technological choices and the information used when developing machine-learning algorithms. Contracts between technology developers and users must clearly allocate risks between the parties.   See, in particular: Rina Dechter (1986). Learning while searching in constraint-satisfaction problems. University of California, Computer Science Department, Cognitive Systems Laboratory, 1986; LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature. 521 (7553): 436–444. For example: Matthias, Andreas. "The responsibility gap: Ascribing responsibility for the actions of learning automata." 
Ethics and information technology 6.3 (2004): 175-183; Singh, Jatinder, et al. "Responsibility & machine learning: Part of a process." Available at SSRN 2860048 (2016); Molnar, Petra, and Lex Gill. "Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System." (2018). Rucho v. Common Cause, No. 18-422, 588 U.S. ___ (2019). 279 F.Supp.3d 587 (2018). Houston Fed. of teachers v. Houston Independent, 251 F.Supp.3d 1168 (2017); Brennan Ctr. for Justice at New York Univ. Sch. of law v. New York City Police Dept. 2017 NY Slip Op 32716(U) (NY Supreme Court). Decision no. 2018-765 DC dated June 12, 2018 (Loi relative à la protection des données personnelles). DD v. Wong Estate, 2019 ABQB 171. For example: Graves, Alex, Greg Wayne, and Ivo Danihelka. Neural Turing Machines. arXiv:1410.5401, [cs.NE], 2014.

    Read more
  • Autonomous Air Vehicles: Are they at the gates of our cities?

    For many years now, we have been discussing the arrival of autonomous vehicles on Quebec roads. Thus, in April 2018, the government amended the Highway Safety Code1 to adapt it to the particularities of these new vehicles. However, the automotive sector is not the only one being transformed by automation: the aeronautics industry is also undergoing profound changes, particularly with the introduction of autonomous air transport technologies in urban travel. Terminology There are many terms used in the autonomous air transport industry, including “autonomous flying car”, “unmanned air vehicle” and even “autonomous air taxi”. For its part, the International Civil Aviation Organization (ICAO) has proposed some terms that have been included in various official documents, including certain legislation2. These terms are as follows: Unmanned air vehicle: A power-driven aircraft, other than a model aircraft, that is designed to fly without a human operator on board; Unmanned air system: An unmanned aircraft and all of the associated support equipment, control station, data links, telemetry, communications and navigation equipment; Remotely piloted aircraft system: A partially autonomous remotely piloted aircraft; Model aircraft (also called a “drone”): A small aircraft with a total weight not exceeding 35 kg that is not designed to carry persons. As for Canadian legislation, it uses specific vocabulary and defines a remotely piloted aircraft system as “a set of configurable elements consisting of a remotely piloted aircraft, its control station, the command and control links and any other system elements required during flight operation”, whereas a remotely piloted aircraft is defined as “a navigable aircraft, other than a balloon, rocket or kite, that is operated by a pilot who is not on board3”. 
Legislative Framework In accordance with Article 8 of the Convention on International Civil Aviation4, it is prohibited for unmanned aircraft to fly over the territory of a State without first obtaining the authorization of the State in question. In Canada, the standards governing civil aviation are found in the Aeronautics Act5 and its regulations. According to subsection 901.32 of the Canadian Aviation Regulations (the “CARs”), “[n]o pilot shall operate an autonomous remotely piloted aircraft system or any other remotely piloted aircraft system for which they are unable to take immediate control of the aircraft6.” Since the 2017 amendment of the CARs, it is now permitted to fly four (4) categories of aircraft ranging from “very small unmanned aircraft” to “larger unmanned aircraft7”, subject to certain legislative requirements: The use of unmanned aircraft weighing between 250 g and 25 kg is permitted upon passing a knowledge test or obtaining a pilot permit, if applicable8; To fly unmanned aircraft over 25 kg to transport passengers, it is mandatory to obtain an air operator certificate9. Ongoing projects Many projects developing unmanned aircraft are underway. The most high-profile and advanced projects are those of automotive, aeronautics and technology giants, including Airbus’s Vahana, Boeing’s NeXt program, Toyota’s SkyDrive and the Google-backed Kitty Hawk Cora10. The most advanced project appears to be UberAIR. 
In addition to actively working on developing such a vehicle with many partners like Bell and Thales Group, Uber’s project stands out by also focusing on all the marketing aspects thereof. The program is slated for launch in three cities as early as 202311. These cities are expected to host a test fleet of approximately fifty aircraft connecting five “skyports” in each city12. Challenges Although the technology seems to be advancing rapidly, many obstacles remain before this means of transport can truly be implemented in our cities, in particular the noise that these aircraft generate and the issues relating to their certification, costs and profitability, the safety of their urban use, social acceptability and the establishment of the infrastructure necessary to operate them. In the event of an accident involving an autonomous aerial vehicle, we can foresee that the manufacturers of such vehicles could be held liable, as could the subcontractors involved in manufacturing them, such as piloting software and flight computer manufacturers. We could therefore potentially be faced with complex litigation cases. Conclusion A study predicts that there will be about 15,000 air taxis by 2035 and that this industry will be worth more than $32 billion at that time13. In the context of climate change and sustainable transportation, and as a way to counter urban sprawl, these vehicles offer an interesting transit alternative that may very well change our daily habits. The flying car is finally at our doorstep!   Highway Safety Code, CQLR, c C-24.2. Government of Canada, Office of the Privacy Commissioner of Canada, Drones in Canada, March 2013, at pp. 4-5. Canadian Aviation Regulations, SOR/96-433, s. 101.01. International Civil Aviation Organization (ICAO), Convention on International Civil Aviation (“Chicago Convention”), 7 December 1944, (1994) 15 U.N.T.S. 295. Aeronautics Act, RSC 1985, c. A-2. Canadian Aviation Regulations, SOR/96-433, s. 901.32. 
Government of Canada, Canada Gazette, Regulations Amending the Canadian Aviation Regulations (Unmanned Aircraft Systems) - Regulatory Impact Analysis Statement, July 15, 2017. Canadian Aviation Regulations, SOR/96-433, s. 901.64 et seq. Canadian Aviation Regulations, SOR/96-433, s. 700.01.1 et seq. Engineers Journal, The 13 engineers leading the way to flying car, May 29, 2018 Dallas, Los Angeles, and another city yet to be announced. Uber Elevate, Fast-Forwarding to a Future of On-Demand Urban Air Transportation, October 27, 2016, Porsche Consulting, “The Future of Vertical Mobility – Sizing the market for passenger, inspection, and goods services until 2035.” 2018

    Read more
  • Artificial intelligence: is your data well protected across borders?

    Cross-border deals are always challenging, but when they relate to AI technologies, such deals additionally involve substantial variations in the rights granted in each jurisdiction. Looking at cross-border deals involving Artificial Intelligence technologies therefore requires a careful analysis of these variations in order to properly assess the risks, but also to seize all available opportunities. Many AI technologies are based on neural networks and rely on large amounts of data to train the networks. The value of these technologies rests mostly on the ability to protect the related intellectual property, which may lie, in some cases, in the innovative approach of the technology, in the work performed by the AI system itself and in the data required to train the system. Patents Given the pace of developments in Artificial Intelligence, when a transaction is being negotiated, we are often working with patent applications, well before any patent is granted. That means we often have to assess whether or not these patent applications have any chance of being granted in different countries. Contrary to patent applications on more conventional technologies, in AI technologies one cannot take it for granted that an application that is acceptable in one country will lead to a patent in other countries. If we look at the US, the Alice1 decision of a few years ago had a major impact, making many Artificial Intelligence applications difficult to patent. Some issued AI-related patents have been declared invalid on the basis of this case. However, it is obvious from the patent applications that are now public that several large companies keep filing patent applications for AI-related technologies, and some of them are being granted. Just across the border up north, in Canada, the situation is more nuanced. 
A few years ago, the courts said in the Amazon2 decision that computer implementations could be an essential element of a valid patent. We are still hoping for a specific decision on AI systems. In Europe, Article 52 of the European Patent Convention excludes "programs for computers". However, a patent may be granted if a “technical problem” is resolved by a non-obvious method3. There may be some limited potential for patents on Artificial Intelligence technologies there. The recently updated Guidelines for Examination of patent applications related to AI and machine learning, while warning that expressions such as "support vector machine", "reasoning engine" or "neural network" raise a caution flag as typically referring to abstract models devoid of technical character, point out that applications of AI and ML do make technical contributions that are patentable, such as: The use of a neural network in a heart-monitoring apparatus for the purpose of identifying irregular heartbeats; or The classification of digital images, videos, audio or speech signals based on low-level features, such as edges or pixel attributes for images. In contrast, classifying text documents solely on the basis of their textual content is cited as not being regarded as a technical purpose per se, but a linguistic one (T 1358/09). Classifying abstract data records or even "telecommunication network data records" without any indication of a technical use being made of the resulting classification is also given as an example of failing to be a technical purpose, even if the classification algorithm may be considered to have valuable mathematical properties such as robustness (T 1784/06). In Japan, according to examination guidelines, software-related patents can be granted for inventions “concretely realizing the information processing performed by the software by using hardware resources”4. It may be easier to get a patent on an AI system there. 
As you can appreciate, you may end up with variable results from country to country. Several industry giants, such as Google, Microsoft, IBM and Amazon, keep filing applications for Artificial Intelligence and AI-related technologies. It remains to be seen how many, and which, will be granted, and ultimately which will be upheld in court. The best strategy for now may be to file applications for novel and non-obvious inventions with a sufficient level of technical detail and examples of concrete applications, in the event that case law evolves such that Artificial Intelligence patents are indeed valid a few years down the road, at least in some countries. In the US, judicial exceptions remain: Mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations; Certain methods of organizing human activity: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviours; business relations); managing personal behaviour or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and Mental processes: concepts performed in the human mind (including an observation, evaluation, judgment, opinion). Take-home message: patent applications on AI technology that identify a technical problem, provide a detailed technical description of specific implementations of the innovation that solve or mitigate the technical problem, and give examples of possible outcomes have a greater hope of being allowed and maturing into a stronger patent. Setting the innovation within a specific industry or relating it to specific circumstances, and explaining the advantages over known existing systems and methods, contributes to overcoming subject matter eligibility issues. 
Copyright From the copyright standpoint, we also face some difficulties, especially with respect to works created by an AI system. Copyright may protect original Artificial Intelligence software if it consists of “literary works” under the Copyright Act, including: computer source code, interface elements, a set of methods of communication for a database system, a web-based system, an operating system, or a software library. Copyright can cover data in a database if it complies with the definition of a compilation, thereby protecting the collection and assembling of data or other materials. There are two main difficulties in the recognition of copyright protection in AI creations: the first relates to machine-generated work that does not involve the input of human skill and judgment, and the second concerns the concept of an author, which does not specifically exclude machine work but may eliminate it indirectly by way of section 5 of the Copyright Act, which indicates that copyright shall subsist in Canada in an original work where the author was a citizen or resident of a treaty country at the time of creation of the work. Recently, we have seen Artificial Intelligence systems creating visual art and music. The artistic value of these creations may be disputed. However, the commercial value can be significant, for example if an AI creates the soundtrack to a movie. There are major research projects involving the use of AI technologies to write source code for some specific applications, for example in the gaming industry. Some jurisdictions, such as the US and Canada, do not provide copyright protection for works created by machines. In Canada, some recent case law specifically stated that for a work to be protected under the Copyright Act, you need a human author5. In the US, some may remember Naruto, the monkey that took a selfie. In the end, there was no copyright in the picture. 
While we are not sure how this will translate for Artificial Intelligence at this point, it is difficult to foresee that an AI system would have any such right if a monkey has none. Meanwhile, other countries, such as the UK, New Zealand and Ireland, have legal provisions whereby the programmer of the Artificial Intelligence technology will likely be the owner of the work created by the computer. These provisions were not specifically made with AI in mind, but it is likely that the broad language used will apply. For example, in the UK, copyright is granted to “the person by whom the arrangements necessary for the creation of the work are undertaken”6. The work created by the system may have no protection at all in Canada, the US and several other jurisdictions, but be protected by copyright elsewhere, at least until Canada and the US decide to address this issue through legislative changes. Trade secrets Trade secret protection covers any information that is secret and not part of the public domain. In order for the information to remain confidential, a person must take measures, such as obtaining undertakings from third parties not to divulge it. There are no time limits for this type of protection, and protection can be sought for machine-generated information. Data privacy Looking at data privacy, some legal scholars have noted that, if construed literally, the European GDPR is difficult to reconcile with some AI technologies. We just have to think about the right to erasure and the requirement for lawful processing (or lack of discrimination), which may be difficult to implement7. If we look into neural networks, they typically learn from datasets created by humans or by human training. Therefore, these networks often end up with the same bias as the persons who trained them, and sometimes with even more bias, because what neural networks do is to find patterns. 
They may end up finding a pattern and optimizing a situation from a mathematical perspective while having some unacceptable racial or sexist bias, because they do not have “human” values. Furthermore, there are challenges when working with smaller datasets that allow reversing the “learning” process of the Artificial Intelligence, as this may lead to privacy leaks and trigger the right to remove specific data from the training of the neural network, which itself is technically difficult. One also has to take into account laws and regulations that are specific to some industries, for example HIPAA compliance in the US for health records, which includes privacy rules and technical safeguards8. Laws and regulations must be reconciled with local policies, such as those set by government agencies, which need to be met in order to have access to some government data; for example, to access electronic health records in the Province of Quebec, where the authors are based. One of the challenges, in such cases, is to come up with practical solutions that comply with all applicable laws and regulations. In many cases, one will end up creating parallel systems if the technical requirements are not compatible from one country to another.   Alice Corp. v. CLS Bank International, 573 U.S., 134 S. Ct. 2347 (2014). Canada (Attorney General) v. Amazon.com, Inc., 2011 FCA 328. T 0469/03 (Clipboard formats VI/MICROSOFT) of 24.2.2006, European Patent Office, Boards of Appeal, 24 February 2006. Examination Guidelines for Invention for Specific Fields (Computer-Related Inventions), Japanese Patent Office, April 2005. Geophysical Service Incorporated v Encana Corporation, 2016 ABQB 230; 2017 ABCA 125; 2017 CanLII 80435 (SCC). Copyright, Designs and Patents Act, 1988, c. 48, § 9(3) (U.K.); see also Copyright Act 1994, § 5 (N.Z.); Copyright and Related Rights Act, 2000, Part I, § 2 (Act. No. 28/2000) (Irl.). General Data Protection Regulation, (EU) 2016/679, Art. 9 and 17. 
Health Insurance Portability and Accountability Act of 1996

    Read more
  • Open innovation: A shift to new intellectual property models?

    “The value of an idea lies in the using of it.” These words are attributed to Thomas Edison, known as one of the most outstanding inventors of the last century. Though he fervently used intellectual property protections and filed more than 1,000 patents in his lifetime, Edison understood the importance of using his external contacts to foster innovation and pave the way for his inventions to yield their full potential. In particular, he worked with a network of experts to develop the first direct current electrical circuit, without which his light bulb invention would have been virtually useless. Open innovation refers to a mode of innovation that bucks the traditional research and development process, which normally takes place in secrecy within a company. A company that innovates openly will entrust part of the R&D processes for its products or services, or its research work, to external stakeholders, such as suppliers, customers, universities or competitors. A more academic definition of open innovation, developed by Professor Henry Chesbrough at UC Berkeley, reads as follows: “Open innovation is the use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively.”1 Possible approaches: collaboration vs. competition A company wishing to use open innovation will have to decide which innovation "ecosystem" to join: should it favour membership in a collaborative community or a competitive market? Joining a collaborative community In this case, intellectual property protections are limited and the focus is more on developing knowledge through sharing. Many IT companies or consortia of universities join together in collaborative groups to develop skills and knowledge with a view to pursuing a common research goal. Joining a competitive market In this case, intellectual property protections are robust and there is hardly any exchange of information. 
The ultimate goal is profit maximization. Unlike the collaborative approach, relationships translate into exclusivity agreements, technology sales and licensing. This competitive approach is particularly pervasive in the field of video games, for example.

    Ownership of intellectual property rights as a requisite condition for open innovation

    The success of open innovation lies primarily in the notion that sharing knowledge can be profitable. A company then has to strike a balance between what it can reveal to those involved (suppliers, competitors, specialized third-party companies, the public, etc.) and what it can gain from its relationships with them. It also has to anticipate its partners’ actions in order to control its risks before engaging in information sharing. At first glance, resorting to open innovation may seem to be an imprudent use of intellectual property assets. Intellectual property rights generally confer a monopoly on the owner, allowing it to prevent third parties from copying the protected technology. However, studies have shown that the imitation of a technology by a competitor can be beneficial.2 Other research has also shown that a market with strong intellectual property protections increases the momentum of technological advances.3 Ownership of intellectual property rights is therefore a prerequisite for any company that innovates or wants to innovate openly. Because open innovation methods bring companies to rethink their R&D strategies, they also have to manage their intellectual property portfolios differently. A company must, however, keep in mind that it has to properly manage its relations with the various external stakeholders it plans to do business with in order to avoid unwanted disclosure of confidential information relating to its intellectual property and, in turn, to profit from this innovation method without giving up its rights.

    Where does one get innovation?
In an open innovation approach, intellectual property can be brought into a company from an external source, or the transfer can occur the other way around. In the first scenario, a company reduces its control over its research and development process and goes elsewhere for intellectual property or expertise that it does not have in-house. In such a case, the product innovation process can be considerably accelerated by the contributions of external partners, and can result in:

      • the integration of technologies from specialized third-party partners into the product under development;
      • the forging of strategic partnerships;
      • the granting of licences to use a technology belonging to a third-party competitor or supplier to the company;
      • the search for external ideas (research partnerships, consortia, idea competitions, etc.).

    In the second scenario, a company makes its intellectual property available to stakeholders in its external environment, particularly through licensing agreements with strategic partners or secondary market players. In this case, a company can even go so far as to make one of its technologies public, for example by publishing the code of software under an open-source license, or to assign its intellectual property rights to a technology that it owns but for which it has no use.

    Some examples

    Examples of open innovation success stories are many. Google, for example, made its machine learning tool TensorFlow available to the public under an open-source license (Apache 2.0) in 2015. As a result, Google allowed third-party developers to use and modify its technology’s code under the terms of the license while controlling the risk: any interesting discovery made externally could quickly be turned into a product by Google. This strategy, common in the IT field, has allowed the market to benefit from interesting technology and Google to position itself as a major player in the field of artificial intelligence.
The example of SoftSoap liquid soap illustrates the ingenuity of American entrepreneur Robert Taylor, who developed and marketed his product without strong intellectual property protection by relying on external suppliers. In 1978, Taylor was the first to think of bottling liquid soap. For his invention to be feasible, he had to purchase plastic pumps from external manufacturers because his company had no expertise in manufacturing this component. These pumps were indispensable, as they had to be screwed onto the bottles to dispense the soap. At the time, a patent on liquid soap had already been filed, and Mr. Taylor’s invention could not be patented. To prevent his competitors from copying his invention, Taylor placed a $12 million order with the only two plastic pump manufacturers. This had the effect of saturating the market for nearly 18 months, giving Mr. Taylor an edge over competitors who were then unable to compete because no soap pumps were available from manufacturers. ARM processors are a good example of the use of open innovation in a context of maximizing intellectual property. ARM Ltd. has benefited from the reduced control that tech giants such as Samsung and Apple exercise over their development and manufacturing processes, as they increasingly integrate externally developed technologies into their products. The particularity of ARM processors lies in their marketing method: ARM Ltd. does not sell finished processors etched in silicon. Rather, it licenses the architecture it has developed to independent manufacturers. This makes ARM Ltd. different from other processor manufacturers and has allowed it to gain a foothold in the IT parts supplier market, offering a highly flexible technology that can be adapted to various needs depending on the type of product (phone, tablet, calculator, etc.) in which the processor will be integrated.
Conclusion

    The use of open innovation can help a company significantly accelerate its research and development process while limiting costs, either by using the intellectual property of others or by sharing its own. Although there is no magic formula, it is certain that to succeed in an open innovation process, a company must have a clear understanding of the competitors and partners it plans to collaborate with and manage its relations with them accordingly, so as not to jeopardize its intellectual property.

    1. Henry Chesbrough, Wim Vanhaverbeke and Joel West, Open Innovation: Researching a New Paradigm, Oxford University Press, 2006, p. 1.
    2. Silvana Krasteva, “Imperfect Patent Protection and Innovation,” Department of Economics, Texas A&M University, December 23, 2012.
    3. Jennifer F. Reinganum, “A Dynamic Game of R and D: Patent Protection and Competitive Behavior,” Econometrica, The Econometric Society, Vol. 50, No. 3, May 1982; Ryo Horii and Tatsuro Iwaisako, “Economic Growth with Imperfect Protection of Intellectual Property Rights,” Discussion Papers in Economics and Business, Graduate School of Economics and Osaka School of International Public Policy (OSIPP), Osaka University, Toyonaka, Osaka 560-0043, Japan.

  • Artificial intelligence at the lawyer’s service: is the dawn of the robot lawyer upon us?

    Over the past few months, our Legal Lab on Artificial Intelligence (L3AI) team has tested a number of legal solutions that incorporate AI to a greater or lesser extent. According to the authors Remus and Levy,1 most of these tools will have a moderate potential impact on legal practice. Among the solutions tested by the members of our laboratory, certain functionalities in particular drew our attention.

    Historical context

    At the start of the 1950s, when Grace Murray Hopper, a pioneer of computer science, attempted to convince her colleagues to create a computer language using English words, she was told that it was impossible for a computer to understand English. Contrary to the engineers and mathematicians of the time, however, the business world was more receptive to the idea. Thus was born “Business Language version 0,” or B-0, the forerunner of a number of more modern computer languages and a first (small) step towards the processing of natural language. The fact remains that using IT for legal solutions was a challenge, specifically because of the nature of the information to be processed, which was often presented in text format and was not very organized. In 1986, author Richard Susskind was already addressing the use of artificial intelligence to process legal information.2 It was not until recently, however, with advances in the field of natural language processing, that we have seen the creation of software applications with the potential to substantially modify the practice of law. A number of lawyers and notaries are now concerned about the future of their profession. Are we witnessing the creation of the robot lawyer? Currently, the technological solutions available to legal practitioners make it possible to automate certain specific aspects of the multitude of tasks they fulfill in their work.
The tools for automating and analyzing documents are relevant examples in that they make it possible, on the one hand, to create legal documents from an existing model and, on the other, to identify potentially problematic elements in the documents submitted. However, no solution can claim to completely replace the legal practitioner. Recently, the above-mentioned authors Remus and Levy analyzed and measured the impact of automation on the work of lawyers.3 Generally speaking, they predict that only the document research process will be significantly disrupted by automation, and that the tasks of managing files, drafting documents, conducting due diligence reviews and performing research and legal analysis will be only slightly impacted. They likewise feel that document management, legal drafting, consulting, negotiating, collating facts, preparation and representation before the court will be only slightly impacted by solutions integrating artificial intelligence.4

    Documentary analysis tools (Kira, Diligen, Luminance, Contract Companion, LegalSifter, LawGeex, etc.)

    First, among the tools for conducting documentary analysis, there are two types of solutions offered on the market. On the one hand, several use supervised and unsupervised learning techniques to sort and analyze a vast number of documents in order to draw specific information from them. This type of tool is particularly interesting in the context of a due diligence review. It makes it possible to identify, among other things, the object of a given contract, certain clauses and the applicable laws, in order to detect elements of risk determined beforehand by the user. Due diligence tools such as Kira, Diligen and Luminance are examples.5 On the other hand, certain solutions are designed to analyze and review contracts to facilitate negotiations with a third party.
This type of tool uses natural language processing (NLP) to identify the specific terms and clauses of a contract. It also identifies elements missing from a specific type of contract: in a confidentiality agreement, for example, the tool will notify the user if the concept of confidential information is not defined. Moreover, it provides comments on the various elements identified in order to guide the negotiation of the contract’s terms. These comments and guidelines can be modified based on the attorney’s preferred practices. Such solutions are particularly useful when a legal professional is called on to advise a client on whether or not to accept the terms of a contract tabled by a third party. The Contract Companion6 tool drew our attention because of its ease of use, even if it merely serves to assist a human drafting a contract without identifying problematic clauses and their content. Instead, it detects inconsistencies such as a capitalized term with a missing definition, among other examples. LegalSifter and LawGeex7 are presented as assistants to the negotiation process, proposing solutions that identify discrepancies between a submitted contract and the best practices favoured by the firm or company, thereby helping to flag and resolve any missing or problematic clauses.

    Legal research tools (InnovationQ, NLPatent, etc.)

    Recently, solutions that make it possible to conduct legal research and predict the outcome of court decisions have appeared on the market. Some companies propose simulating a ruling based on factual elements outlined in the context of a given legal system to help with decision-making. To do so, they use NLP to understand the questions asked by attorneys and to search legislation, case law and doctrinal sources.
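    None of the vendors mentioned above publish the details of their methods. Purely as a toy illustration of the kind of consistency check described for documentary analysis tools (flagging a capitalized term that a contract uses but never defines), here is a minimal regex-based sketch; commercial tools rely on NLP and trained models rather than simple patterns like this:

    ```python
    import re

    # Toy illustration only (not any vendor's actual method): flag
    # capitalized terms that a contract uses but never defines. A term
    # counts as "defined" if it appears in a pattern such as
    # «"Term" means …» or «"Term" shall mean …».

    def undefined_terms(contract: str) -> set:
        # Candidate terms: runs of two or more Capitalized Words.
        candidates = set(re.findall(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", contract))
        # Terms the contract actually defines via a definition clause.
        defined = set(re.findall(r"[\"“]([^\"”]+)[\"”]\s+(?:shall mean|means)", contract))
        return candidates - defined

    contract = (
        'All use of Confidential Information by the Receiving Party is restricted. '
        '"Receiving Party" means the party receiving it.'
    )

    print(undefined_terms(contract))  # {'Confidential Information'}
    ```

    On this sample, “Receiving Party” is defined, so only “Confidential Information” is flagged. A real tool must also handle synonyms, cross-references and terms defined in annexed documents, which is precisely where machine learning earns its keep.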
Some of the solutions even provide lawyers with an assessment of their chances of winning or losing based on given elements, such as the opposing party’s lawyer, the judge and the level of the court. To do so, the tool uses machine learning: it asks questions about the client’s situation and then analyzes thousands of similar cases on which the courts have already ruled. Lastly, the artificial intelligence system formulates a prediction based on all of the cases analyzed, along with a personalized explanation and a list of relevant case law. With the advent of these tools, authors are anticipating significant changes in the types of lawsuits that will be brought before the courts. They predict that technology will enable the settlement of disputes and that judges will only have to rule on matters that raise the most complex legal questions and require genuine legal developments.8 In patent law, the search for existing inventions (“prior art” in the intellectual property lexicon) is facilitated by tools that rely on NLP. Patent applications are usually drafted in a specialized vocabulary. These solutions make it possible to identify the target technology, determine the relevant prior art and analyze the related documents so as to identify the elements disclosed. In this regard, the InnovationQ and NLPatent9 tools seem to show interesting potential.

    Legal drafting tools (Specif.io, etc.)

    Some of the solutions available on the market call on the “creative” potential of artificial intelligence applied to the legal field. Among these, we were interested in a solution capable of drafting a specification in the context of a patent application. The Specif.io10 tool makes it possible to draft a description of the invention using vocabulary suited to the form required for patent applications, based on claims that briefly outline the scope of the invention.
For the time being, this solution is restricted to the field of software development. Even if, given the current stage of the product, the lawyer is usually called on to rework the text significantly, he or she can save a considerable amount of time on a first draft.

    Recommendations

    In conclusion, artificial intelligence tools are not all progressing in the same manner in every area of the law. A number of tools can already assist attorneys with various repetitive tasks or help them identify errors or potential risks in different documents. However, it is important to bear in mind that such tools are still far from having the human capacity to contextualize their operations. Where the information is organized and structured, as with patent applications, for which databases are organized and accessible online in most Western nations, automated tools can not only assist users in completing their tasks but even provide a first draft of a specification based on simple draft claims. Research and development are still needed, however, before we can truly rely on such solutions. We therefore feel it relevant to issue certain key recommendations to attorneys seeking to integrate AI tools into their everyday practice:

      • Be aware of the possibilities and limits of an AI tool: when selecting an AI tool, it is important to test it in order to assess how it operates and the results it produces. One must set a specific objective and ensure that the tool being tested can help achieve it.
      • Human supervision: to date, any AI tool must still be used under human supervision. This is not only an ethical obligation to ensure the quality of the services rendered, but also a simple rule of caution when using tools that cannot contextualize the information submitted to them.
      • Processing of ambiguities: several AI tools allow their operational settings to be varied. Such settings ensure that the processing of any ambiguous situation is entrusted to the humans operating the tools.
      • Data confidentiality: remember that we are bound to uphold the confidentiality of the data being processed! The processing of confidential information by solution providers is a critical issue to consider; do not be afraid to ask questions on this subject.
      • Informed employees: artificial intelligence too often frightens employees. As with any technological change, internal training is needed to ensure that the use of such tools complies with the company’s requirements. Thus, not only must the proper AI tools be selected, but the proper training must be provided in order to benefit from them.

    1. Remus, D. & Levy, F. (2017). “Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law.” Georgetown Journal of Legal Ethics, 30, 501.
    2. Susskind, R. E. (1986). “Expert Systems in Law: A Jurisprudential Approach to Artificial Intelligence and Legal Reasoning.” The Modern Law Review, 49(2), 168–194.
    3. Supra, note 1.
    4. Id.
    5. kirasystems.com; diligen.com; luminance.com.
    6. https://www.litera.com/products/legal/contract-companion.
    7. legalsifter.com; lawgeex.com.
    8. Luis Millan, “Artificial Intelligence,” Canadian Lawyer (April 7, 2017), online: http://www.canadianlawyermag.com/author/sandra-shutt/artificial-intelligence-3585.
    9. http://ip.com/solutions/innovationq/; nlpatent.com.
    10. specif.io/index.
