Ultreia Strategic Management
What every leader needs to understand before August 2026.
Artificial intelligence is already part of how organizations operate.
Across functions and industries, AI informs decisions, automates processes, and shapes outputs in ways that were not possible five years ago. Most organizations adopted these tools piecemeal, and adoption has consistently outpaced the evolution of their governance structures.
That gap is no longer just operational.
It is now a regulatory one.
The EU AI Act is already in force, its reach extends well beyond Europe, and organizations across the Americas that do not map their exposure now will be managing a crisis instead of a strategy by 2026.
In work across Spain, Latin America, and the United States, the same pattern keeps coming up.
In Europe, the EU AI Act is already part of internal conversations.
Outside Europe, it is still often treated as something external.
It is not.
The regulation applies based on where AI outputs have an effect, not where the organization is headquartered.
Geography does not remove exposure. Impact defines it.
At its core, the EU AI Act is a simple idea applied at scale.
Not all AI carries the same risk, so not all AI should be treated the same.
The regulation classifies AI systems into four levels:
Risk tiers · Arts. 5–6

| Tier | Examples | Obligations |
|---|---|---|
| Unacceptable risk | Social scoring, biometric surveillance, manipulation | Banned since Feb 2025 |
| High risk | Employment, credit, biometrics, infrastructure | Full obligations from Aug 2026 |
| Limited risk | Chatbots, deepfakes, AI-generated content | Transparency only, from Aug 2026 |
| Minimal risk | Spam filters, game AI, optimization | No obligations |
Accurate classification is the first step of any compliance program. Most teams underestimate how many systems qualify.
Most organizations will not be affected by the extremes. The real exposure sits in the middle.
High-risk systems include AI used for financial and credit decisions, hiring and workforce management, access to essential services, and other contexts where outcomes directly affect people or regulated environments. For these systems, the expectation is not just governance. It is evidence.
Organizations must be able to show how the system works, how it is monitored, where its limitations are, and how decisions are ultimately controlled by humans.
Penalties vary by type of infringement. For prohibited AI practices, fines can reach 35 million euros or 7% of worldwide annual turnover, whichever is higher. For non-compliance with high-risk system obligations, 15 million euros or 3%, again whichever is higher. In both cases, the turnover figure is that of the whole undertaking, which for a group means the global parent's worldwide revenue.
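To make the arithmetic concrete, here is a minimal sketch of how the two ceilings interact. The function name and integer-euro convention are illustrative assumptions, not terms from the regulation:

```python
def fine_ceiling(worldwide_turnover_eur: int, prohibited: bool) -> int:
    """Upper bound of an EU AI Act fine, in euros: a fixed amount or a
    percentage of worldwide annual turnover, whichever is higher."""
    # Two tiers: prohibited practices vs. high-risk obligations.
    fixed, pct = (35_000_000, 7) if prohibited else (15_000_000, 3)
    return max(fixed, worldwide_turnover_eur * pct // 100)

# A group with EUR 2 billion in worldwide turnover:
fine_ceiling(2_000_000_000, prohibited=True)   # 140_000_000 (the 7% figure dominates)
fine_ceiling(2_000_000_000, prohibited=False)  # 60_000_000 (the 3% figure dominates)
fine_ceiling(100_000_000, prohibited=True)     # 35_000_000 (the fixed floor dominates)
```

Note the crossover: in both tiers the percentage overtakes the fixed amount at roughly 500 million euros of worldwide turnover, which is why large groups focus on the percentage, not the headline number.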
The real shift is not the fine. It is the standard.
This is where most organizations outside Europe get it wrong.
The regulation is not triggered by where your organization is based. It is triggered by where your AI outputs have an effect.
If your systems influence decisions affecting individuals in the EU, process EU-related data in regulated contexts, or support operations connected to European entities, you may already be within scope.
Even if your team is in Brazil, your model is built in the US, and your platform is hosted somewhere else.
Geography does not remove exposure. Impact defines it.
This was intentional. The regulation was designed to prevent organizations from avoiding obligations simply by moving work across borders.
| Situation | Applies? | Why |
|---|---|---|
| US tax organization generating transfer pricing documentation for EU-based clients | Yes | AI output used within the EU |
| Brazilian audit practice using AI on EU-listed company engagements | Yes | Engagement output affects EU-regulated entity |
| Mexican consulting organization deploying HR analytics for a multinational's European workforce | Yes | AI decisions affect individuals in the EU |
| Caribbean organization with no EU clients and no EU data | No | No EU nexus in outputs or affected persons |
| Colombian organization using AI tools only for domestic client work | Review | No direct exposure now. Changes if EU clients are added |
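The logic behind the table can be compressed into a first-pass triage. The function and its outcome labels below are illustrative assumptions, a screening aid rather than a legal test:

```python
def eu_ai_act_screen(affects_eu_individuals: bool,
                     eu_regulated_data: bool,
                     supports_eu_operations: bool,
                     eu_clients_planned: bool = False) -> str:
    """First-pass triage of EU AI Act exposure (illustrative, not legal advice)."""
    # Any EU nexus in outputs or affected persons puts the system in scope.
    if affects_eu_individuals or eu_regulated_data or supports_eu_operations:
        return "likely in scope: classify the system and document it"
    # No nexus today, but the picture changes as soon as EU work is added.
    if eu_clients_planned:
        return "review: reassess when EU clients or data are added"
    return "no EU nexus: monitor for changes"

# The Brazilian audit practice from the table: engagement output
# affects an EU-regulated entity, so it screens as in scope.
eu_ai_act_screen(False, True, True)
```

The point of the sketch is the ordering: impact questions come first, and geography never appears as an input.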
These situations are more common than they seem.
A back-office team in Brazil supporting European operations.
The team processes financial data or flags anomalies using AI tools. The work happens in Brazil. The output affects European operations.
A strategy team modeling a Spain-Mexico expansion.
AI is used to evaluate market entry, pricing, or partnerships. The analysis is built outside Europe. But it informs decisions tied to European capital or entities.
A US organization acquiring a company in Spain.
AI supports due diligence, risk scoring, and financial modeling. The outputs directly influence an investment decision involving a European entity.
A recruitment team in Latin America hiring for Spain.
AI tools screen and rank candidates. The decision affects individuals in the EU.
Across all of these, the pattern is the same.
The work happens outside Europe. The impact does not.
One way to understand where this matters most is to follow investment flows.
Where European capital is present, European regulatory expectations tend to follow.
EU foreign direct investment in the United States totals €2.67 trillion, representing 28.7% of all FDI stocks held by the EU outside its borders. Europe overall accounts for 64% of all foreign direct investment in the country.
In Latin America, Brazil leads with €276 billion in EU investment stock. Mexico follows at approximately €205 billion, where Spain alone accounts for over 30% of inflows. Colombia, Chile, Argentina, and Peru round out the top recipients.
In the Caribbean, exposure is often structured through financial and offshore environments directly connected to European capital markets.
This is not a niche issue. It sits inside mainstream cross-border operations.
| Country / Region | EU Investment | Key Sectors | Exposure |
|---|---|---|---|
| United States | €2.67 trillion | Finance, pharma, tech, manufacturing | Very high |
| Brazil | €276 billion | Finance, energy, consumer goods | High |
| Mexico | €205 billion | Manufacturing, automotive, retail | High |
| Colombia, Chile, Argentina, Peru | Significant | Energy, finance, infrastructure | Medium |
| Caribbean offshore centers | €233–419 billion | Financial services, investment vehicles | Medium-high |
This is not something arriving in the future. It is already in motion.
The European Commission's Digital Omnibus on AI proposal, on which Parliament adopted its position in March 2026, would push the high-risk deadline to December 2027 for most systems and to August 2028 for AI embedded in regulated products. Until final adoption, August 2026 remains the legally binding date.
Prepare as if 2026 is real. Plan with flexibility if timelines shift.
This is not a compliance task to delegate. It is a leadership decision.
What to do with this
Map your AI footprint first.
Identify where your EU exposure actually sits.
Decide how to engage, not whether to engage.
Map your AI footprint first.
Not just formal tools. Every embedded AI feature across every platform your teams use. Most organizations cannot answer this question today. That inventory is the foundation of everything else.
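In practice, that inventory can start as one structured record per tool or embedded feature. A minimal sketch, with field names that are illustrative assumptions rather than regulatory terms:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of an AI footprint inventory (illustrative fields)."""
    name: str               # tool or embedded feature, e.g. a CV-screening plugin
    business_function: str  # hiring, credit, tax, audit, marketing, ...
    eu_nexus: bool          # do outputs affect EU individuals, data, or entities?
    risk_tier: str          # "unacceptable" | "high" | "limited" | "minimal"
    owner: str              # the accountable person, not just the vendor

inventory = [
    AISystemRecord("Resume ranking feature", "hiring", True, "high", "HR lead"),
    AISystemRecord("Spam filter", "IT", False, "minimal", "IT lead"),
]

# The records where obligations concentrate: EU nexus plus high-risk use.
priority = [r.name for r in inventory if r.eu_nexus and r.risk_tier == "high"]
```

Even this skeletal version forces the two questions most organizations cannot yet answer per system: is there an EU nexus, and who owns it.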
Identify where your EU exposure actually sits.
Not all AI use triggers the same obligations. Financial decisions, hiring, and regulated data environments are where the most significant requirements concentrate.
Decide how to engage, not whether to engage.
The question is no longer whether this regulation reaches your organization. For most organizations across the Americas with any European connection, it already does. The strategic question is whether you engage on your timeline or on a regulator's.
Regulation creates constraints. It also creates positioning.
Organizations that move early will not only reduce risk. They will build capabilities that others will need. The AI governance market is projected to grow from roughly $300–500 million today to between $3.6 and $5.8 billion by 2033.
The EU AI Act is not a European issue. It is a cross-border operating condition.
And like most cross-border realities, it shows up in execution before it shows up in strategy.
If your organization is already using AI in work connected to Europe, the exposure is likely already there. The question is whether you map it now, or explain later why you did not.
Tamary Diaz Otero is the Founder and Principal Advisor at Ultreia Strategic Management. She works with CEOs and senior leaders navigating international strategy, cross-border growth, and complex organizational decisions across Latin America, the Caribbean, and the United States. She is based between Puerto Rico and Spain.
If this raised questions relevant to your organization, she would like to hear from you.
Seguimos. 🐚