Futuristic technologies are being adopted at an unprecedented rate — millions of smart speakers have been sold across the U.S., smart homes that deploy dozens of internet-of-things devices are in the making, and self-driving vehicles are being tested across many states.

But even as these technologies come ever closer to realization, we haven’t yet fully assessed their usage, impact and security protocols. As these technologies become commonplace, the security aspect, especially, has been largely ignored both by the government and by the companies backing them.

That said, the government has recently begun to act on the issue, making a start with security guidelines for smart homes. Still, the little that has been done, like the proposed Internet of Things cybersecurity law, is inadequate, because such guidelines cannot be a one-time exercise; they need to be updated periodically, at least every quarter, after reassessing the risk environment. As it currently stands, the bill merely proposes that the hardware be certified by the U.S. government.

According to Senator Ron Wyden (D-Oregon), an obvious market failure has occurred in the case of such devices, leaving manufacturers with little incentive to take security seriously. For example, Google now issues monthly security patches for its Android smartphones, but no such updates are currently available for the Google Home speaker or Android Wear devices.

Most smart home devices come with AI-enabled voice assistants, and they have access to a vast trove of personal information. But the proposed law offers no guidelines with regard to the security of such information.

Simply put, as our cars, home appliances and even our work environments grow smarter and more connected, the security risks associated with such technologies are also accelerating at a high pace. And our regulatory system is still playing catch-up.

Self-driving cars, for example, despite the heavy investments made the world over by auto and tech companies, are not covered by any basic security guidelines. According to a recent report, many self-driving cars used the same chip as the first-generation iPhone, which would make them an easy target for hackers.

In fact, the flaw would also let hackers send ransomware to a self-driving car — imagine being stuck in a self-driving car that is actually being controlled by a hacker. It could potentially become a life-threatening hostage-and-ransom situation.

And self-driving cars and smart homes are not the only technologies at risk. Artificial intelligence is being adopted at a very fast rate and across an ever-expanding array of fields. While it does make life easy, the fact remains that AI is based on algorithms, and if a base algorithm is tampered with, AI can also be reprogrammed.

To put things in perspective, what we could have at hand, with respect to AI, is not a doomsday scenario with Skynet-like technology taking over — simply because the technology hasn't progressed that far. But intruders can cause large-scale disruptions in AI-based processes by introducing even a minor flaw in the algorithm.

"Machine learning (ML) algorithms—the tools that allow AI to exhibit intelligent behavior—need data to function properly and accurately," cyber security expert Adam Segal wrote on the Council on Foreign Relations website in an article titled "The Cybersecurity Vulnerabilities to Artificial Intelligence" published Monday. "While it is possible to make better predictions without ample data, ML algorithms are more accurate with more data. Thus, the primary method for compromising AI so far has been through data manipulation. If data manipulation goes undetected, any organization could struggle to recover the correct data that feeds its AI system, potentially having potentially disastrous consequences in healthcare or finance sectors."

Given how dependent we are becoming on AI, and since it is expected to replace humans in many jobs, the risks such flaws carry are huge. Unless and until these risks are properly assessed and preventive measures to plug vulnerabilities are put in place, AI adoption needs to be closely monitored.

The fact remains that the security risks associated with such evolving technologies need to be studied further and acted upon before these technologies are widely adopted. Governments need to put strict security guidelines in place, while tech companies need to address the issue more seriously and start issuing regular updates to plug vulnerabilities, the way they currently do for smartphones.

In conclusion, the implementation of security guidelines and protocols for such devices is the need of the hour, even though progress has been slow on that front.