
Law & Technology: Risks and Opportunities from the Tectonic Forces at Work

Table of Contents

What the technology future holds

Big Data and IoT

AI and Robotics

FinTech and Blockchain

Cybersecurity and privacy

Rapid developments in information technology are changing how people work, consume, play and interact. Government policy will influence the direction of technological developments, and laws and regulations will undoubtedly need to change to address the new reality.

A good example of the multifaceted advance of new issues comes from the field of medical research. Artificial intelligence (AI) and "Big Data" are accelerating the application of genomic information to the diagnosis and treatment of cancer and chronic diseases. An individual's genome contains a great deal of personal information that current privacy laws may protect—even small bits of information can be abused if handled improperly. On the other hand, in order to promote innovative treatments, new regulations may be needed to allow the sharing of genetic information with researchers across platforms and national borders, raising thorny issues under relevant privacy regimes. Ethical issues, such as the potential demand for "designer babies," can also emerge once the predictive power of genetic technologies is further developed.

A similar array of relationships and issues can be found in numerous other sectors, such as finance (e.g., AI, fintech, and fiduciary duty), the automotive industry (e.g., robotics, the Internet of Things (IoT) and liability) and consumer household items (e.g., IoT, cybersecurity and personal privacy). Indeed, the list is almost endless.

These issues were explored recently at a White & Case seminar series in Tokyo. The series—"Law & Technology: Risks and Opportunities from the Tectonic Forces at Work"—featured lectures and discussions on the rapid changes occurring in the technology sphere, such as AI, IoT, Big Data, fintech, cryptocurrency and cybersecurity: advances we hear about almost daily, yet which are challenging the existing rules. It examined the changes that technology will bring to our society, the opportunities technology might offer, the drawbacks that might follow, and the intersection between technology, regulation and the law.

 

What the technology future holds

There are essentially two schools of thought on what impact technology is going to have on humankind in the future. The technology pessimists fear developments in AI, Big Data, the IoT and other areas will somehow replace human interactions and decision-making and, ultimately, rob us of what it means to be human. On the other side are the technology optimists, who believe these advances will benefit humanity and help us to solve many of the problems that we face today and that would otherwise be worse in the future.

Keynote speaker Kenneth Cukier, author and senior editor for The Economist and now based in London after several years in Japan, clearly falls into the technology optimist category. He cited the example of how JR East applies "deep learning" to keep watch on the thousands of kilometers of railway tracks that need monitoring on a virtually constant basis. Until now, this critically important yet tedious task has been on the shoulders of human inspectors—all of whom are prone to human fallibility or a wandering mind, no matter how dedicated they are to their jobs.

JR East has started attaching cameras to the front of some of its trains to capture images of the track during the vehicles' regular operations and then analyzes them to identify potential faults.

"Here is a perfect example of where AI could eliminate jobs but probably doesn't," said Cukier. "There is a cost saving, but you can now invest the same amount of money in more people and you will have better rail safety—and we can all agree that is a good thing."

Behind JR East's new safety device is data. And behind the data is an algorithm and, ultimately, a technique called machine learning, which attempts to spot patterns in the data. And all of this is a far cry from the outset of the digital age, when programmers told computers what to do and not to deviate from those actions. Today, Cukier pointed out, computers are given access to vast amounts of data and encouraged to extract what they need to make an "informed" decision, and the results are now better than when computers were given more narrowly defined directions.
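The contrast Cukier draws—hand-coded rules versus decisions extracted from data—can be illustrated with a deliberately simple sketch. This is not JR East's system; it is a toy example in which a fault-detection threshold is learned from hypothetical labeled measurements rather than written into the program in advance.

```python
# Illustrative sketch only: a toy "learned" fault detector, not JR East's system.
# Instead of hard-coding a rule ("flag any reading above X"), the program
# derives its decision threshold from labeled training examples.

def learn_threshold(samples):
    """Pick the midpoint between the highest normal reading and the
    lowest faulty reading seen in the labeled training data."""
    normal = [value for value, is_fault in samples if not is_fault]
    faulty = [value for value, is_fault in samples if is_fault]
    return (max(normal) + min(faulty)) / 2

# Hypothetical labeled track-gap measurements: (millimeters, is_fault)
training = [(2.0, False), (2.4, False), (3.1, False), (5.9, True), (6.4, True)]

threshold = learn_threshold(training)  # 4.5 for this data

def flag(reading):
    """Flag a new reading as a potential fault."""
    return reading > threshold

print(flag(3.0), flag(5.0))  # False True
```

The point of the sketch is the inversion Cukier describes: feed the program more or different data and its behavior changes, with no change to the code itself.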

Machine learning can be traced back to the 1950s, but it is today ubiquitous—down to the Washlet developed by Japan's Toto Ltd., which learns when the bathroom is most likely to be used and automatically warms the seat in anticipation.

Yet Cukier anticipates an even larger leap from warm toilet seats in the near future, with autonomous cars today becoming reality and serving as a harbinger of the future benefits of AI.

"In the next five, 10, 20 years, we are looking at a new wave of computing in society, very similar to what happened around 1995 to 2015," he says. "Then it was the internet applying to all manner of life; now it is going to be AI applying to all manner of life, and we are going to take every problem we can, turn it into a prediction problem and then solve it with AI." He adds, "By AI, I mean machine learning, deep learning, reinforcement learning, all the different flavors of taking data, with algorithms, with programs that write themselves—because the world is too complex to specify all the rules at the outset."

“In the next five, 10, 20 years, we are looking at a new wave of computing in society.”
Kenneth Cukier

This application of AI, Cukier believes, will be visible in three ways: it will increase the scale of what humankind can do; it will increase the speed at which we can operate; and it will improve upon the accuracy of what a person would otherwise achieve. And the applications are countless, he points out, and already being explored. AI has, for example, been used in pathology—where it was able to identify more markers that indicated a cell sample was cancerous than were known to scientists.

Inevitably, there will be challenges to technological advances. Privacy will be a major consideration, and it is already impacting the medical sector, where algorithms that could potentially predict infections and prevent patient illness cannot be applied due to privacy concerns.

A second challenge is explainability: being able to conclusively identify the reasons for a course of action. That is not always possible with modern AI techniques, because the computer is, in effect, programmed to make its own decisions.

"The point is that we have a moral obligation to use artificial intelligence, so we need to solve these two thorny issues," says Cukier. "As we use AI, we are going to apply it to climate change, we are going to think about curing diseases; we'll use AI to manage our cities and to manage our stock portfolios.

"But, in the end, it's really going to throw a question mark on what it is to be human and what it is to 'know'."

Legislative changes needed

In the past, the world's economies have essentially been decentralized networks in which international trade was overseen by shared laws and agreements. Now, technology permits a shift to a distributed network in which everybody is connected to everybody—but the laws have not kept pace with that revolution, according to Robert Alan Feldman, professor, Tokyo University of Science and senior advisor, Morgan Stanley MUFG Securities.

Keynote, Robert Feldman

Arguably, the most critical problem facing Japan is demographics. With the population aging and living longer and fewer children being born, the nation needs to devise ways to increase productivity or to deepen the capital stock. And a necessary condition for better capital is redesign of existing legislation and corporate governance in order to spread technology faster, Feldman says.

One hurdle in Japan is vested interests, particularly visible in the energy sector, which will require regulatory changes. But the example of California in the 1950s, when smog was choking parts of the state, demonstrates that legislation can encourage technological progress—in that case, toward clean-running vehicles.

Blockchain technology will serve to lower costs and improve services to consumers. In the media sector, technological changes required new rules—"social rules, not so much legal rules," says Feldman—to ensure the accuracy of information.

Turning to the future of capitalism, Feldman cited Joseph Schumpeter's 1935 book Can Capitalism Survive?, in which the political economist stated that new technologies challenge oligopolies and monopolies. That is positive, he says, because it reduces market distortions. Yet the process of capitalism also brings "creative destruction," which causes volatility in the economy by destroying old jobs even while creating new, more productive jobs. It is important to control that volatility, Feldman says, "and that is why it is important for us to get the regulatory side of things right." What we need, he says, "are laws that enhance competition, accelerate retraining and then set evidence-based regulatory metrics."

 

Big Data and IoT

There has been a rapid increase in the amount of information available, the speed at which it moves, and the importance that it plays in virtually every facet of our lives. Rapid evolution in Big Data, AI, IoT and other technological advances pose challenges to society, regulators and those tasked with ensuring the legal environment keeps pace with that evolution.

We need to ensure that society "gets extra value" out of these advances, says David Albagli, a White & Case local partner in Tokyo, in order that they "seamlessly enter into our lives" and impact them in a positive way.

Albagli opened the seminar discussion by asking whether the old assumption that a company's employees are its biggest asset still holds, or whether data is now the most critical corporate asset.

Most Japanese companies have a fundamentally different mind-set from their US counterparts, says Hiroshige Sugihara, Founder, President and CEO of Digital Aid, Inc., in part due to a chronic shortage of IT engineers: most Japanese firms reach out to external providers to handle the IT side of their operations, while US firms tend to keep their IT operations in-house. Corporations need to identify their most critical data and ensure that it is adequately protected, says Sugihara. For most firms, he adds, the introduction of AI or Big Data is less important than simply making sure that existing data is secure.

The growth of data as a service, including across international borders, is noteworthy. It is critical for a company to protect its data assets while, at the same time, it is equally important that methods are devised to ensure the privacy of data in an era in which we are all increasingly interconnected through social media.

From left to right: David Albagli, Yoshitaka Sugihara, Hiro Sugihara and Masami Takahashi

What regulations should companies use?

Regulation of developing technologies is complex, and it becomes significantly more complicated when crossing borders, as individual nations set their own rules. This raises the question: Should companies set internal rules that comply with the strictest standards that are already in place, or is it better to match regulations to local requirements?

Based on his experiences at both Uber and WeWork, Masami Takahashi, General Manager of WeWork Japan, says that as global platforms, members want a system that offers a similar experience in every market in order to reduce confusion. Legal compliance should work in the same way, adhering to the highest standards—although that might not always be easy to achieve in practice.

“The best solution is for companies and governments to reach the optimum balance between ‘hard laws’ and self-regulation.”
Yoshitaka Sugihara

While a global legal framework might be the most desirable situation, says Yoshitaka Sugihara, Co-Chair of the Digital Economy Committee and Vice President of the American Chamber of Commerce in Japan (ACCJ), the reality is that each country has its own culture and existing regulations on security and privacy, meaning that part of his challenge is to create a "soft landing" between global and local rules. The best solution, he says, is for companies and governments to reach the optimum balance between "hard laws" and self-regulation. But he agrees that the authorities overseeing compliance can take far too long—the Japanese government typically takes two years to pass a new law, he points out, which is an eternity for nimble tech firms.

Privacy or convenience?

Companies and individuals never used to complain when anyone could pick up a telephone directory and simply find a person's phone number, notes Hiroshige Sugihara. Anyone who is involved in business must, he says, accept that a certain amount of their information is going to be readily available and that they just need to make sure that the critically important data is well protected.

Measures should be in place to stop a few people from holding individuals' data—effectively holding it hostage by charging for access—he adds, and power should be given "back to the people." It is wrong, he believes, to make people pay for access to their own data if they need medical treatment, for example.

 

AI and Robotics

Investments in the AI sector are burgeoning: 2017 saw US$12 billion of venture capital investment in AI firms globally, with a further US$22 billion of AI-related global M&A activity, notes White & Case partner Benjamin Saul.

Similarly, Facebook's artificial intelligence translation service handles 4.5 billion requests every day, Google Home and Amazon Echo devices are in 20 percent of US homes, and AI is providing investment advice in the financial services sector, making diagnoses in hospitals and broadly reshaping customer engagement.

AI and robotics raise numerous legal issues, including data privacy, rights and ownership, AI decision-making, and liability for AI and robotics gone wrong. What is the best way to address these legal issues? Answers to these questions will shape how AI consumer lenders, manufacturers of self-driving vehicles and drones, to name just a few, conduct their businesses and seek to promote innovation.

How much regulation?

From a technologist's point of view, less regulation is better, says Tobias Pfeiffer, lead engineer for Preferred Networks, Inc. But, he adds, it is clearly important that companies not be permitted to release defective or even dangerous AI products onto the market. Perhaps one yardstick for machine learning decisions would be to permit their use only after it has been demonstrated through statistical measurement that the outcomes achieved by machine learning or another technology surpass those produced by human beings, he suggests.

While broadly agreeing that the fewer regulations hampering business the better, Yasushi Kusaka, chairman and COO of the Robot Fund, notes that regulation remains important in areas such as data protection or security, as technology continues to make advances.

Dealing with bias

A key regulatory issue with AI is ensuring that AI doesn't make decisions that discriminate against a certain group of people in society because of their race, ethnicity, gender, other protected characteristics or proxies thereof—decisions that would potentially be illegal and could ruin a company.

“There will be an increasing need to explain decisions that are taken by AI and to provide transparency.”
Benjamin Saul

There will be an increasing need to explain decisions that AI makes and to provide transparency, says Saul. Bias can potentially creep into a deep learning system at the input stage, the training stage or elsewhere throughout the creation of the technology, he says, adding that the situation in the United States is "particularly vexing" from a legal perspective and requires "explainability" and action to identify and neutralize bias. German lawmakers have taken steps to address this problem, according to Pfeiffer, through regulations that, for example, prohibit a decision on a personal loan from being based solely on a person's address, and he notes that similar data points can also be excluded for other assessments.
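One simple form of the data-point exclusion Pfeiffer describes can be sketched as a pre-processing step: protected attributes, and known proxies for them such as address, are stripped from an application before it reaches a scoring model. The field names below are hypothetical, and this is only a first-line control—as the panel notes, it does not remove bias that enters through correlated features.

```python
# Hedged sketch: strip protected attributes (and known proxies such as
# address or postal code) from a loan application before scoring.
# Field names are hypothetical, not from any real system.

PROTECTED_OR_PROXY = {"race", "ethnicity", "gender", "address", "postal_code"}

def scrub(application: dict) -> dict:
    """Return a copy of the application with excluded fields removed."""
    return {k: v for k, v in application.items() if k not in PROTECTED_OR_PROXY}

app = {"income": 52000, "postal_code": "150-0001", "gender": "F", "tenure_years": 4}
print(scrub(app))  # {'income': 52000, 'tenure_years': 4}
```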

While data inputs can be controlled, the problem deepens when the artificial intelligence makes connections beyond human involvement, according to Saul. In the US, regulatory technology is being used to "stress test" the deep learning algorithms that are used, for example, to produce a decision on a consumer loan.

Human bias is an issue in retail financial services, Kusaka agrees, adding that AI can provide a more objective decision, but that many issues still need to be addressed before deep learning can be applied without discrimination.

Should developers make their techniques open-source?

Making software source code free and available to all gains attention for a company, according to Pfeiffer, and collaboration helps to improve the product. Yet the secret to success remains high-quality data that is annotated and "concrete algorithms." Most client companies, on the other hand, feel that the "democratization of data" is "absolute sacrilege," says Saul, although it may in the future be forced upon some sectors.

In the financial services sector, banks and securities companies have traditionally considered client data to be theirs and little was disclosed. However, Kusaka observes, that is slowly changing to allow institutions to share such data and for the individual to possess his or her own data, which will serve to "enhance visibility."

The same issue is "vexing" the industry in the US, Saul agrees, although algorithmic techniques are becoming so sophisticated that even data that has been anonymized can now be traced quite accurately back to its original owner—which raises some potentially difficult new issues under the existing data privacy frameworks.

 

FinTech and Blockchain

The combination of finance and technology (fintech) is not new, but developments with blockchain technology and artificial intelligence (AI) have created new business opportunities as well as risks. Rapidly advancing technologies in areas such as mobile and digital banking, payment and settlements, data analytics, marketplace and online lending, digital tokens and currencies, personal finance, foreign exchange and "regtech" will necessarily interact with growing regulation and legislation regarding them.

Up to now, industries have been regulated in a "vertical sense," says Arthur M. Mitchell, a senior counselor and head of the law & technology group for White & Case in Tokyo. This means that banks, fund managers or securities firms have largely stuck to their areas of specialty. The advent of technology giants such as Google and Amazon has changed that landscape, however, and regulators in jurisdictions worldwide need to devise the best way of policing these new players in the market.

Fintech investment and regulation

Companies' investments into fintech may hinge on the state of the regulatory framework. Approximately 6 percent of expenses are currently allocated to technology functions at US financial firms, says Michael Alfant, President and Chief Executive Officer of Fusion Systems Co., Ltd.—although in Japanese companies, there has been a reluctance to replace legacy systems, with 80 percent of expenditure going towards maintenance. Ideally, a CEO needs to be able to predict the regulatory framework that is likely to develop in his or her company's sector in the next three to five years and to use that knowledge to advise the board on where to spend. In Alfant's opinion, spending on legacy technology is not a productive use of capital.

Predicting regulations is a difficult task, however, notes Nels Hansen, a local partner with White & Case in Tokyo. So is the question of whether a company operating in the global financial marketplace can or should comply with regulations solely in its home jurisdiction (thus isolating itself and its products from other jurisdictions' customers and laws), or whether it would benefit from spending, or would need to spend, the time, energy and funds to achieve compliance with the rules in every country in which it might be said to "operate."

Blockchain and cryptocurrency represent an opportunity to disrupt both dominant online platforms and autocratic governments through a decentralized, direct-to-customer approach, says Hironao Kunimitsu, founder and CEO of gumi Inc., an entertainment company that develops, operates and distributes online games for smartphones and tablets in Japan and internationally. When well-executed, these technologies may not only avoid excessive control being leveraged by governments, but may also help to develop a more competitive marketplace that benefits all consumers, he suggests.

Hironao Kunimitsu

US regulatory system

The regulatory system in the US is "very fractured," with regulation at both the state and national level, and within the national framework, oversight is further divided across multiple agencies, such as the Federal Reserve, according to Saul. However, he adds, there is a clear trend at the federal level towards increased knowledge-sharing around financial technology issues, along with an increasing acceptance that there needs to be inter-agency cooperation to balance the promotion of innovation and responsible oversight. There have been similar moves in London and Singapore recently.

“Adoption of these technologies by the public sector in the RegTech context might have the effect of encouraging regulators to accept them in the business world.”
Benjamin Saul

Legislatively, there has been little movement on AI in the US, although a number of senior politicians have taken up the issue and are focusing on explainability and algorithmic accountability, says Saul, while there have also been legislative movements in other jurisdictions around the world. He adds that global standards are unlikely to be adopted in the near future, although there is arguably a need for that approach because the technology cuts across so many industries. Failure to do so could lead to the stifling of innovation.

One concern, according to Hansen, is that regulators could impose excessive regulation in a knee-jerk reaction to disruption in an industry that is perceived to have been caused by technology, which could also constrain future development and positive advances.

Kunimitsu suggests it is "unfortunate" that blockchain technology was initially applied to cryptocurrencies, because the sector is so sensitive and problems with virtual currencies—such as the hacking of the Mt. Gox bitcoin exchange in 2014—have led to Japan "lacking flexibility" in the regulations it has introduced. The situation becomes even more complicated, he points out, when transactions cross international borders and are subject to different legal requirements. The countries that adopt appropriate and flexible regulations soonest will turn out to be the winners for this business.

The solution may be to keep the regulators at a distance and let the free market play out, suggests Alfant. Adoption of these technologies by the public sector in the RegTech context might have the effect of encouraging regulators to accept them in the business world, says Saul, with signs that blockchain technology is starting to take hold in certain areas in the US, such as property records, although uptake in government is "in the very early stages."

Minimizing risk of future regulations

How might a firm minimize the risk of future regulations having a negative impact on the technology that it deploys in a jurisdiction? Identifying the "right and safe approach" is the "million-dollar question" for many firms at the moment, says Saul. Given the uncertainty, he says his best advice is to be in dialogue with the regulator about future plans "so you can have an informed, considered, and as justifiable as possible approach."

 

Cybersecurity and privacy

The world's digital transformation has brought increased challenges to the safety and security of digital data. Concerns over the ownership of personal data and its uses are emerging worldwide. There has been a sharp increase in the frequency, velocity, sophistication, severity, and impact of cyberattacks against companies and organizations globally.

What are the best practices for protecting personal information and the integrity of digital systems while making room for the innovation promised by new technologies?

Companies under attack in Asia

A 2017 global study by Kroll International, Inc. found that 86 percent of surveyed companies experienced a cyber incident or information/data theft, loss or attack in the last 12 months, according to Tam Huynh, Senior Director of Cyber Security and Investigations for Kroll.

The most common attacks are virus or worm infections that exploit loopholes in internet-facing systems, such as a website or email server, or "spear phishing" attacks via emails. Attacks can also come through employees or vendors.

In January 2018, a hacking attack was made against a cryptocurrency exchange in Tokyo in which an estimated US$500 million was lost. Similar high-profile attacks have been reported recently elsewhere in Asia, including in Malaysia, Bangladesh and Hong Kong, where personal information on 6.75 million people was illegally accessed.

Growing government regulation

Alarmed at this increase in cyberattacks and privacy breaches, governments have started to tighten their controls, with the European Union's General Data Protection Regulation (GDPR) the latest attempt to stem activities in this area. Adopted by the EU in 2016, the regulation went into effect on May 25, 2018. One of the most significant components of the new regulation is that it requires organizations in Europe and, in certain circumstances, organizations around the world that handle data from individuals in the EU to report data breach incidents to regulators within 72 hours of the discovery of the breach. The penalties for failing to do so are steep—up to 4 percent of a company's annual global revenue. Other countries are adopting similar cybersecurity and privacy laws, including Australia, Singapore and Japan, notes Huynh.
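The 72-hour rule described above is simple arithmetic, but the clock starts at awareness of the breach, which is why incident-response timing matters so much in practice. A minimal sketch, using an illustrative discovery timestamp:

```python
# Sketch of the GDPR timing rule described above: a breach must be reported
# to the regulator within 72 hours of the organization becoming aware of it.
from datetime import datetime, timedelta

def reporting_deadline(awareness_time: datetime) -> datetime:
    """Latest permissible time to notify the regulator."""
    return awareness_time + timedelta(hours=72)

aware = datetime(2018, 5, 28, 9, 30)  # hypothetical moment of discovery
print(reporting_deadline(aware))      # 2018-05-31 09:30:00
```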

From left to right: Detlev Gabel (screen), Jun Usami, Tam Huynh and Shino Asayama

Corporate programs needed

As well as legislation at the national level, it is critical for companies and organizations to implement effective cybersecurity breach management programs "that must be endorsed and supported all the way from the top," Huynh recommends.

“It is critical for companies and organizations to implement effective cybersecurity breach management programs.”
Tam Huynh

Employees who are aware of the potential threat to their organization can serve as an effective first line of defense, underlining the importance of instructing staff about the potential threats that exist. Kroll also recommends that companies strengthen their cybersecurity risk management through exercises for all staff.

Liability risks in Japan

Companies operating in Japan could face administrative or civil liability as a result of a security incident, as well as a loss of reputation once the case is reported in the media, says Shino Asayama, a White & Case associate in Tokyo. Under Japanese law, however, there is presently no obligation to report a security breach related to personal information to the authority (Personal Information Protection Commission), although the rules say that companies should endeavor to do so. And, if a company's failure to adequately protect data constitutes a breach of the terms of the contract, it can be sued for damages for failing to meet its contractual obligations.

For example, in the 2006 Yahoo! BB case, an employee of an outside company with access to the company's system illegally obtained personal data on 4.51 million customers. Despite being the victim and claiming it had no way of predicting the criminal actions of the outside employee, the court decided Yahoo! BB had breached its duty of care to its customers.

As a result, companies operating in Japan need to have appropriate security measures in place in case there is an external attack on company systems or an internal leak (often caused by contractors who have the right to access a company's system). They also need to ensure adherence to legal and official guidelines.

Five components of a cybersecurity risk management system

An effective cybersecurity risk management system has five basic components, according to Steven R. Chabinsky, a White & Case partner in Washington, DC. These are:

  • Consideration of the specific technology risks the company faces and the establishment of priorities
  • Implementation of appropriate protection in the form of technical, administrative and physical controls
  • The ability to detect and properly evaluate security incidents
  • The ability to respond swiftly to a breach and to bring together all sections of the company to stop a breach from worsening, while communicating effectively and while complying with any legal and contractual obligations
  • The ability to recover from a cyberattack, get back to business and make sure the problem does not occur again

The head of a cybersecurity department should be able to predict the likely impact of a breach or a hacking attack that makes sensitive data or entire networks unavailable, and it is also important to determine the implications for a brand, its operations and finances. The key, Chabinsky says, is to have adequate, business-appropriate measures in place in advance of an incident and staff trained in how to implement them. Vulnerability scans, annual risk assessments and third-party penetration tests are extremely useful, along with written plans for any eventuality.

There are also clear benefits to using role-based restrictions to limit employees' access to sensitive data or their ability to add or alter software, multi-factor authentication for signing in to sensitive or remote systems, and encryption for stored or transmitted data. Another key control is an effective backup strategy for data that is lost or maliciously destroyed.
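The role-based restriction Chabinsky mentions reduces to a simple idea: each role is mapped to the resources it may touch, and anything not explicitly granted is denied. A minimal sketch, with illustrative role and resource names:

```python
# Minimal sketch of role-based access control: map each role to the
# resources it may access, and deny by default. Role and resource
# names are hypothetical.

ROLE_PERMISSIONS = {
    "analyst":  {"reports"},
    "engineer": {"reports", "deploy_tools"},
    "admin":    {"reports", "deploy_tools", "customer_pii"},
}

def can_access(role: str, resource: str) -> bool:
    """Allow access only if the role explicitly grants the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "customer_pii"))  # False
print(can_access("admin", "customer_pii"))    # True
```

The deny-by-default design matters: an unknown role, or a typo in a role name, grants nothing rather than everything.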

When it comes to detection capabilities, Chabinsky cautions that older security products are no longer a guarantee against a cyberattack because hackers constantly change their methods, and he adds that scenario-driven tabletop exercises have proved an effective learning experience for companies concerned about a breach to test their readiness to respond effectively across the organization.

EU's GDPR regulation

As noted above, the EU's GDPR went into effect on May 25, 2018.

The GDPR replaces outdated rules and unifies how the EU regulates the processing of personal data. The new rules have a global scope: all firms doing business in the EU must adhere to the regulation and provide greater transparency. They also impose stricter limits on the use of personal data, raising the bar for compliance significantly. Finally, the GDPR dramatically increases the penalties for noncompliance, to €20 million or 4 percent of the organization's worldwide revenue, whichever is higher.
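The penalty ceiling described above is a "greater of" calculation, which is worth seeing as arithmetic: for smaller organizations the fixed €20 million floor dominates, while for large ones the 4 percent figure takes over. The revenue figures below are illustrative only.

```python
# The GDPR penalty ceiling described above, as arithmetic:
# the greater of €20 million or 4 percent of worldwide annual revenue.

def gdpr_penalty_ceiling(worldwide_revenue_eur: float) -> float:
    return max(20_000_000.0, 0.04 * worldwide_revenue_eur)

print(gdpr_penalty_ceiling(100_000_000))    # 20000000.0 (fixed floor applies)
print(gdpr_penalty_ceiling(2_000_000_000))  # 80000000.0 (4% exceeds €20 million)
```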

“It is vital for organizations to consider the impact the GDPR will have on their operations.”
Detlev Gabel

Consequently, it is vital for organizations to consider the impact the GDPR will have on their operations, says Detlev Gabel, a White & Case partner in Frankfurt. One of the core concepts of the GDPR is that businesses are required to ensure the security of the personal data they process, including against unauthorized or unlawful processing, accidental loss, destruction or damage.

Under the GDPR, companies are responsible for and must be able to demonstrate that they are compliant with the regulation, and one way to achieve that is through compliance with an acknowledged certification mechanism. Businesses using service providers for the processing of private data must also ensure that those companies comply with the GDPR.

If a successful attack occurs, it must be reported to the competent authority within 72 hours of a company becoming aware of it—"a challenging deadline," says Gabel, because it requires an organization to identify, review and report on a problem under great pressure. Breaches involving high risk to the rights and freedoms of individuals must also be reported to anyone who has been affected, permitting those individuals to take steps to protect their private information.

In addition, businesses must comply with the EU's directive on the security of network and information systems, which went into effect in August 2016. The directive is designed to protect networks and systems as a whole, imposing notification requirements on companies that operate essential services, such as energy, water, banking, healthcare and digital infrastructure, Gabel notes. Similarly, digital service providers such as search engines and cloud computing companies are required to comply with certain security and notification requirements. In the event of a breach, security authorities and data protection authorities are required to cooperate in order to deal with an issue.

Companies that have not yet reached compliance with the GDPR need to create a road map with clear tasks, responsibilities and milestones, Gabel recommends. There is a need to start building awareness throughout an organization and to set up a data protection team under a leader. At this point, critical issues need to be prioritized, and some "quick wins" can be generated by ticking off some easy-to-reach requirements. Finally, he adds, companies should seek advice from experts, if needed, as it can save a lot of time and money.

Conclusions

It is clear that we are seeing a new era of dramatic technological change unfold before us, which is reshaping existing industries and catalyzing the growth of new ones. With each change and advance, we see these new forces testing laws and regulations.

Throughout the seminar series, the panelists' comments reflected the idea that there is a strong driving force to deliver new technologies to the market and society. Yet there was an instinctive realization that policy, regulation, and law will shape the ways in which the technologies develop.

Of course, numerous difficult questions will need to be resolved. What is the right balance between regulation and innovation? Where do we draw the line between "privacy" and "access to information"?

Resilience in the face of unintended consequences will also be needed, as the pendulum could swing towards overregulation in response to perceived harms to the public as technology pushes into the unknown.

 

White & Case Technology Newsflash

 

This publication is provided for your convenience and does not constitute legal advice. This publication is protected by copyright.
© 2018 White & Case LLP