Preparing today for tomorrow’s AI regulations

AI is rapidly becoming ubiquitous across business systems and IT ecosystems, with adoption and development racing faster than anyone could have expected. Today it seems that everywhere we turn, software engineers are building custom models and integrating AI into their products, as business leaders incorporate AI-powered solutions in their working environments.

However, uncertainty about the best way to implement AI is stopping some companies from taking action. Boston Consulting Group’s latest Digital Acceleration Index (DAI), a global survey of 2,700 executives, revealed that only 28% say their organisation is fully prepared for new AI regulation.

Their uncertainty is exacerbated by AI regulations arriving thick and fast: the EU AI Act is on the way; Argentina has released a draft AI plan; Canada has the AI and Data Act; China has enacted a slew of AI regulations; and the G7 nations have launched the “Hiroshima AI process.” Guidelines abound, with the OECD developing AI principles, the UN proposing a new AI advisory body, and the Biden administration releasing a blueprint for an AI Bill of Rights (although that could quickly change under the second Trump administration).

Legislation is also coming in individual US states and is appearing in many industry frameworks. To date, 21 states have enacted laws regulating AI use in some manner, including the Colorado AI Act and clauses in California’s CCPA, and a further 14 states have legislation awaiting approval.

Meanwhile, there are loud voices on both sides of the AI regulation debate. A new survey from SolarWinds shows 88% of IT professionals advocate for stronger regulation, and separate research reveals that 91% of British people want the government to do more to hold businesses accountable for their AI systems. On the other hand, the leaders of over 50 tech companies recently wrote an open letter calling for urgent reform of the EU’s heavy-handed AI regulations, arguing that they stifle innovation.

It’s certainly a tricky period for business leaders and software developers, as regulators scramble to catch up with tech. Naturally, you want to take advantage of the benefits AI can provide, but you also want to do so in a way that sets you up for compliance with whatever regulatory requirements are coming, without handicapping your AI use unnecessarily while your rivals speed ahead.

We don’t have a crystal ball, so we can’t predict the future. But we can share some best practices for setting up systems and procedures that will prepare the ground for AI regulatory compliance.

Map out AI usage in your wider ecosystem

You can’t manage your team’s AI use unless you know about it, but that alone can be a significant challenge. Shadow IT is already the scourge of cybersecurity teams: Employees sign up for SaaS tools without the knowledge of IT departments, leaving an unknown number of solutions and platforms with access to business data and/or systems.

Now security teams also have to grapple with shadow AI. Many apps, chatbots, and other tools incorporate AI, machine learning (ML), or natural language processing (NLP) without being obviously AI products. When employees log into these tools without official approval, they bring AI into your systems without your knowledge.

As Opice Blum’s data privacy expert Henrique Fabretti Moraes explained, “Mapping the tools in use – or those intended for use – is crucial for understanding and fine-tuning acceptable use policies and potential mitigation measures to decrease the risks involved in their utilisation.”

Some regulations hold you responsible for AI use by vendors. To take full control of the situation, you need to map all the AI in your own environment and in those of your partner organisations. In this regard, a tool like Harmonic can be instrumental in detecting AI use across the supply chain.
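
As a starting point for that mapping, here is a minimal sketch that cross-references a SaaS inventory export against a list of vendors known to embed AI; the vendor list, CSV columns, and file name are illustrative assumptions rather than a real feed.

```python
# Minimal sketch: flag potential shadow AI in a SaaS inventory export.
# The vendor list and CSV format are illustrative assumptions, not a real feed.
import csv

# Hypothetical list of vendors known to embed AI/ML features.
KNOWN_AI_VENDORS = {"openai", "anthropic", "jasper", "copy.ai", "otter.ai"}

def flag_shadow_ai(inventory_csv: str) -> list[dict]:
    """Return inventory rows whose vendor matches a known AI provider."""
    flagged = []
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: app_name, vendor, approved
            vendor = row["vendor"].strip().lower()
            if vendor in KNOWN_AI_VENDORS and row["approved"].lower() != "yes":
                flagged.append(row)  # unapproved AI tool: a review candidate
    return flagged

if __name__ == "__main__":
    for row in flag_shadow_ai("saas_inventory.csv"):
        print(f"Review: {row['app_name']} ({row['vendor']})")
```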

Verify data governance

Data privacy and security are core concerns for all AI regulations, both those already in place and those on the brink of approval.

Your AI use already needs to comply with existing privacy laws like GDPR and the CCPA, which require you to know what data your AI can access, what it does with that data, and to demonstrate guardrails that protect the data AI uses.

To ensure compliance, put robust data governance rules in place in your organisation, managed by a defined team and backed up by regular audits. Your policies should include due diligence that evaluates the data security and data sources of all your tools, including those that use AI, to identify areas of potential bias and privacy risk.
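
As one way to make that due diligence repeatable, the sketch below codifies a hypothetical checklist as a data structure with an audit function; the field names and checks are illustrative assumptions, not a standard schema.

```python
# Minimal sketch: a codified due-diligence checklist for AI-enabled tools.
# Field names and checks are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class ToolRecord:
    name: str
    data_sources_documented: bool  # do we know where its training/input data comes from?
    pii_access: bool               # does it touch personal data (GDPR/CCPA scope)?
    dpa_signed: bool               # is a data processing agreement in place?
    bias_review_done: bool         # has a bias/privacy risk review been completed?

def governance_gaps(tool: ToolRecord) -> list[str]:
    """Return the due-diligence items a tool still fails."""
    gaps = []
    if not tool.data_sources_documented:
        gaps.append("undocumented data sources")
    if tool.pii_access and not tool.dpa_signed:
        gaps.append("PII access without a signed DPA")
    if not tool.bias_review_done:
        gaps.append("no bias/privacy risk review")
    return gaps

chatbot = ToolRecord("support-chatbot", True, True, False, False)
print(governance_gaps(chatbot))  # ['PII access without a signed DPA', 'no bias/privacy risk review']
```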

“It is incumbent on organisations to take proactive measures by enhancing data hygiene, enforcing robust AI ethics and assembling the right teams to lead these efforts,” said Rob Johnson, VP and Global Head of Solutions Engineering at SolarWinds. “This proactive stance not only helps with compliance with evolving regulations but also maximises the potential of AI.”

Establish continuous monitoring for your AI systems

Effective monitoring is crucial for managing any area of your business. When it comes to AI, as with other areas of cybersecurity, you need continuous monitoring to ensure that you know what your AI tools are doing, how they are behaving, and what data they are accessing. You also need to audit them regularly to keep on top of AI use in your organisation.
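
One lightweight way to get that visibility, assuming your AI calls pass through code you control, is to wrap each call so that every invocation is written to an audit log. The sketch below is illustrative; the logger configuration and the metrics recorded are assumptions.

```python
# Minimal sketch: an audit-logging wrapper around AI tool calls.
# Assumes calls go through your own code; logger setup is illustrative.
import functools
import json
import logging
import time

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def audited(tool_name: str):
    """Decorator that records every call to an AI tool for later review."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            audit_log.info(json.dumps({
                "tool": tool_name,
                "duration_s": round(time.time() - start, 3),
                "input_chars": sum(len(str(a)) for a in args),  # coarse data-volume signal
            }))
            return result
        return inner
    return wrap

@audited("summariser")
def summarise(text: str) -> str:
    return text[:100]  # stand-in for a real model call
```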

“The idea of using AI to monitor and regulate other AI systems is a crucial development in ensuring these systems are both effective and ethical,” said Cache Merrill, founder of software development company Zibtek. “Currently, techniques like machine learning models that predict other models’ behaviours (meta-models) are employed to monitor AI. The systems analyse patterns and outputs of operational AI to detect anomalies, biases or potential failures before they become critical.”
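
As a toy illustration of that idea, the sketch below flags outputs from a monitored model whose quality score deviates sharply from a rolling baseline; a production meta-model would be far richer, and the score metric and threshold here are assumptions for demonstration.

```python
# Toy sketch of "AI watching AI": flag outputs whose score deviates
# sharply from the recent baseline. The metric and threshold are
# illustrative assumptions, not a production meta-model.
from collections import deque
from statistics import mean, stdev

class OutputMonitor:
    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent scores
        self.z_threshold = z_threshold

    def check(self, score: float) -> bool:
        """Return True if the score is anomalous versus the rolling baseline."""
        anomalous = False
        if len(self.history) >= 30:  # wait for a stable baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(score - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(score)
        return anomalous
```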

Cyber GRC automation platform Cypago lets you run continuous monitoring and collect regulatory audit evidence in the background. Its no-code automation means you can build custom workflows without technical expertise, so alerts and mitigation actions are triggered instantly according to the controls and thresholds you set up.

Cypago can connect with your various digital platforms, synchronise with virtually any regulatory framework, and turn all relevant controls into automated workflows. Once your integrations and regulatory frameworks are set up, creating custom workflows on the platform is as simple as uploading a spreadsheet.

Use risk assessments as your guidelines

It’s vital to know which of your AI tools are high risk, medium risk, and low risk – for compliance with external regulations, for internal business risk management, and for improving software development workflows. High-risk use cases will need more safeguards and evaluation before deployment.

“While AI risk management can be started at any point in the project development,” said Ayesha Gulley, an AI policy expert from Holistic AI, “implementing a risk management framework sooner than later can help enterprises increase trust and scale with confidence.”

When you know the risks posed by different AI solutions, you can choose the level of access you’ll grant them to data and critical business systems.

In terms of regulations, the EU AI Act already distinguishes between AI systems with different risk levels, and NIST recommends assessing AI tools based on trustworthiness, social impact, and how humans interact with the system.
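
To make those tiers operational, a first pass can be as simple as the sketch below, which maps declared use cases to EU AI Act-style risk levels; the use-case labels and tier assignments here are illustrative assumptions, not legal guidance.

```python
# Minimal sketch: map declared AI use cases to EU AI Act-style risk tiers.
# The categories and assignments are illustrative, not legal guidance.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"          # extra safeguards + pre-deployment evaluation
    LIMITED = "limited"    # transparency obligations
    MINIMAL = "minimal"

USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,        # feeds employment decisions
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Unknown use cases default to HIGH until reviewed by your governance team.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(tier_for("cv_screening").value)  # high
print(tier_for("new_feature").value)   # high (default until reviewed)
```

Defaulting unknown use cases to the high-risk tier is a deliberate design choice: it forces a review before a new tool gets broad access, rather than quietly treating it as low risk.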

Proactively set AI ethics governance

You don’t need to wait for AI regulations to set up ethical AI policies. Allocate responsibility for ethical AI considerations, put together teams, and draw up policies for ethical AI use that include cybersecurity, model validation, transparency, data privacy, and incident reporting.

Plenty of existing frameworks like NIST’s AI RMF and ISO/IEC 42001 recommend AI best practices that you can incorporate into your policies.
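
If you want to make such a policy enforceable, one option is to encode it as a simple pre-deployment gate, as in the hypothetical sketch below; the sign-off names are assumptions drawn from the policy areas listed above, not a standard.

```python
# Minimal sketch: a pre-deployment "ethics gate" that checks a release
# against the policy areas named above. The sign-off names are illustrative.
REQUIRED_SIGNOFFS = {
    "cybersecurity_review",
    "model_validation",
    "transparency_docs",   # e.g. a model card for the release
    "privacy_assessment",
    "incident_runbook",    # who is paged and how incidents are reported
}

def ready_to_ship(signoffs: set[str]) -> tuple[bool, set[str]]:
    """Return (ok, missing sign-offs) for a proposed AI release."""
    missing = REQUIRED_SIGNOFFS - signoffs
    return (not missing, missing)

ok, missing = ready_to_ship({"cybersecurity_review", "model_validation"})
print(ok, missing)  # False, plus the three missing sign-offs
```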

“Regulating AI is both necessary and inevitable to ensure ethical and responsible use. While this may introduce complexities, it need not hinder innovation,” said Arik Solomon, CEO and co-founder of Cypago. “By integrating compliance into their internal frameworks and developing policies and processes aligned with regulatory principles, companies in regulated industries can continue to grow and innovate effectively.”

Companies that can demonstrate a proactive approach to ethical AI will be better positioned for compliance. AI regulations aim to ensure transparency and data privacy, so if your goals align with these principles, you’ll be more likely to have policies in place that comply with future regulation. The FairNow platform can help with this process, with tools for managing AI governance, bias checks, and risk assessments in a single location.

Don’t let fear of AI regulation hold you back

AI regulations are still evolving and emerging, creating uncertainty for businesses and developers. But don’t let the fluid situation stop you from benefiting from AI. By proactively implementing policies, workflows, and tools that align with the principles of data privacy, transparency, and ethical use, you can prepare for AI regulations and take advantage of AI-powered possibilities.
