A Strategist's View on AI, Policy, and Competition

Scott Weiner serves as AI Lead at NeuEon, where he helps organizations navigate the rapidly evolving AI landscape. An advocate for balanced innovation policy, he believes technologists and policymakers must work together to ensure America remains the place where the future is built. These are his reflections from the Competition Hill Briefing on June 12, 2025.

I never expected to find myself in a congressional hearing room talking about drunk people ordering pizza. On June 12, the marble halls of the Capitol building felt heavy with history as I walked in that morning, my laptop bag slung over my shoulder, wondering if I could really make a difference in how America approaches AI policy.

The team from Act and our panel were the first to enter, the chairs all lined up facing the podiums. An array of water bottles was laid out, which was welcome on a hot day when I was already feeling parched. Sunlight filtered softly through the tall windows of the hearing room, casting gentle reflections on polished wood tables and deep-blue walls lined with dignified portraits. Flags quietly framed the room as congressional staff filtered in, picked up the boxed lunches the team provided, found their seats, and settled in to listen closely, absorbing the significance of the moment unfolding before them.

Morgan Reed, president of the App Association (Act) and our host, opened with that lighthearted example about voice AI systems that can decode slurred late-night pizza orders and still get the pepperoni and pineapple delivered to the right address. The room erupted in knowing laughter. Congressional staffers, it turns out, have ordered their share of late-night pizza. But Morgan’s point landed: AI is already everywhere, from small-town pizza shops to Fortune 500 boardrooms.

I was there representing NeuEon, where I serve as AI Lead helping organizations navigate AI adoption and strategy. Sitting alongside me were AI startup founder Andrei Papancea, law professor Christopher Yoo, and policy analyst Jessica Melugin. The packed room and the intensity of attention told me everything I needed to know about the stakes. As I told the audience early on, I’m learning new AI tools for my job every single day, so trying to stay ahead of this technology feels like drinking from a fire hose. The thought of Congress trying to write rules for something evolving this fast was both encouraging and terrifying.

Over the next two hours, we tackled fundamental questions shaping America’s AI future: how to define AI’s ecosystem, whether startups can truly compete with Big Tech, if America’s lighter regulatory touch beats Europe’s approach, how to handle the avalanche of state AI bills, the promises and perils of open-source AI, and perhaps most importantly, how to prepare people for an AI-driven world.

Making Sense of the AI Stack

Our moderator started with simple definitions: “What is the AI stack, and where are new business models emerging?” I glanced around the room, seeing genuine curiosity on faces, and realized we needed to start with the basics.

“There isn’t one simple AI stack,” I began, echoing what Professor Yoo had noted. AI covers everything from the sensors in self-driving cars to diagnostic algorithms reading medical scans to the large language models everyone now associates with ChatGPT. An autonomous vehicle’s AI has almost nothing in common with an AI medical imaging tool. Trying to regulate them identically would be like having the same rules for airplanes and bicycles because they’re both transportation.

Instead, I explained, think of AI in layers. At the foundation, you have core technology and hardware. Everyone fixates on NVIDIA’s AI chips right now (NVIDIA’s market cap briefly made them one of the world’s four most valuable companies), but as Professor Yoo pointed out, that dominance might be temporary. GPUs built for video games just happened to be perfect for AI calculations. New chip designs could change everything tomorrow.

Above the hardware layer sit foundation models like GPT-4, massive neural networks trained on broad datasets. Then come specialized applications built on those foundations: customer service chatbots, fraud detection systems, the pizza-ordering AI that got us all laughing. The crucial insight is that this stack stays dynamic. No single company controls all the layers, and today’s leaders might be tomorrow’s footnotes.
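For readers who think better in code, here is a deliberately toy Python sketch of that layering. The classes and names are invented purely for illustration; they stand in for real chips, models, and products rather than any vendor’s actual API. The point it makes is the one above: swap out any layer and the ones above it keep working.

```python
# Toy sketch of the layered AI stack (all stubs, no real inference or vendor API).

from dataclasses import dataclass
from typing import Callable

# Layer 1: compute -- GPUs today, possibly very different silicon tomorrow.
@dataclass
class Compute:
    name: str                                   # e.g. "GPU cluster"

# Layer 2: a foundation model trained on broad data, exposed as a callable service.
@dataclass
class FoundationModel:
    name: str
    runs_on: Compute
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] response to: {prompt}"   # stub response

# Layer 3: a narrow application built on top -- the pizza bot, a fraud screen, etc.
@dataclass
class Application:
    name: str
    model: FoundationModel
    prompt_template: Callable[[str], str]
    def handle(self, user_input: str) -> str:
        return self.model.complete(self.prompt_template(user_input))

if __name__ == "__main__":
    chips = Compute("GPU cluster")
    llm = FoundationModel("general-purpose LLM", runs_on=chips)
    pizza_bot = Application(
        "late-night pizza ordering",
        model=llm,
        prompt_template=lambda order: f"Parse this (possibly slurred) pizza order: {order}",
    )
    print(pizza_bot.handle("uh, pepperoni an pineapple to my place"))
```

Replace the Compute layer with a new accelerator, or the FoundationModel with next year’s architecture, and the application layer barely notices. That substitutability is why no single layer guarantees lasting dominance.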

For policymakers, this complexity demands flexibility. A one-size-fits-all rule for “AI” will miss the mark every time. AI in finance needs different treatment than AI in healthcare. Any static definition will be obsolete before the ink dries. Rather than chasing a grand unified theory, we urged everyone to focus on specifics: specific use cases, specific risks, specific sectors. That’s where practical policy lives. 

David vs. Goliath: The Competition Reality

With this foundation established, the conversation naturally turned to competition dynamics: someone asked whether giant companies with massive data centers would inevitably dominate AI, crushing startup innovation.

“In many ways, smaller companies are more nimble in the AI space,” I said, “especially at the application layer.” The room seemed skeptical, so I broke it down.

I shared what I saw with our clients: “A team of five engineers with a near-zero budget can sometimes beat billion-dollar models by narrowing the problem scope and fine-tuning on superior data.” For example, PathAI outperforms general AI models in pathology image analysis by using curated medical datasets. Casetext developed specialized LLMs trained on legal corpora, providing more accurate legal research than general chatbots. And organizations like Hugging Face enable startups to leverage open-source models and APIs, lowering barriers to entry and fostering innovation. The key insight here is that bigger doesn’t always mean better in AI. Quality and focus often trump quantity.
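To make that concrete, here is a minimal sketch of the pattern those small teams follow: take a small open model and fine-tune it on a narrow, high-quality dataset. It assumes the Hugging Face transformers and datasets libraries; the CSV of labeled domain examples is hypothetical, and this is an illustration of the approach rather than any company’s actual pipeline.

```python
# Minimal sketch: fine-tuning a small open model on a narrow, curated dataset.
# Assumes the `transformers` and `datasets` packages are installed. The file
# "curated_domain_examples.csv" (columns: text, label) is a hypothetical
# stand-in for a team's carefully labeled domain data.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "distilbert-base-uncased"      # a small base model, cheap to fine-tune

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Load the narrow, high-quality dataset and hold out 10% for evaluation.
data = load_dataset("csv", data_files="curated_domain_examples.csv")["train"]
data = data.train_test_split(test_size=0.1)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()   # hours on a single GPU, not months on a supercomputer
```

The leverage comes entirely from the data and the narrowed scope, not from compute: the base model is a commodity, and the curated examples are what the five-person team actually owns.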

The numbers back this up. According to recent venture capital data (https://www.mitrade.com/insights/news/live-news/article-3-766642-20250417), AI startups grabbed 57.9% of global venture capital investment in Q1 2025, up from just 28% the previous year. Investors are betting big on new players because they see the opportunities.

Jessica jumped in with an important warning: “Well-intentioned rules meant to rein in tech giants could inadvertently crush startups if not carefully tailored.” A staffer in the back row was taking notes furiously. This seemed to be landing.

The real creative destruction is happening at the edges, in specialized applications where nimble teams can move faster than large corporations. Our job was to make sure Congress understood that protecting competition means protecting the ability of those small teams to thrive.

Regulatory Approaches: The Tale of Two Philosophies

About halfway through our session, the conversation inevitably turned to the elephant in the room: Europe’s aggressive approach to tech regulation. The contrast couldn’t have been clearer, and I could see staffers perking up as Jessica laid out the scorecard.

She delivered the stark reality: “Eight of the world’s ten largest tech companies are American. Zero are European.” The room went quiet. “By market cap and global influence, the U.S. dominates tech, arguably because our policy climate has favored innovation.”

Professor Yoo added context with a recent EU-commissioned report by former Italian Prime Minister Mario Draghi. The report delivered a brutal assessment: Europe’s productivity and innovation rates have fallen to roughly half of U.S. levels, with the EU’s own regulatory environment as a primary culprit. The report specifically cited laws like GDPR and the pending AI Act as innovation killers. (https://www.atlanticcouncil.org/blogs/new-atlanticist/draghis-new-report-on-european-competitiveness/#:~:text=is%20being%20left%20behind,merely%20regulates%20what%20others%20innovate).

The real-world impacts are already visible. U.S. companies are delaying or withdrawing AI products from Europe due to regulatory uncertainty. Google postponed launching its Bard AI chatbot in the EU because of privacy regulator concerns. Meta disabled AI features on products like smart glasses, not for technical reasons, but because the legal environment was too unpredictable (https://cepa.org/article/europes-ai-blues-us-companies-slow-deployment/#:~:text=Privacy%20rules%20represent%20a%20crucial,only%20available%20in%20the%20US). Meta’s spokesman called the EU regulator’s decision “a step backwards for European innovation.”

When rules are unclear or overly restrictive, companies simply geo-fence their innovations away from those markets. Europe risks becoming a tech backwater where new AI features arrive last, if at all.

Since writing this, the EU AI Act’s rollout has grown even more uncertain: the critical Code of Practice has been delayed, technical guidance is still pending, and policymakers are openly debating whether to pause or amend the law amid mounting concerns from industry. With Apple and Meta already withholding key AI products from Europe, and no new launches announced, the regulatory climate remains a major barrier and leaves European consumers and businesses waiting as the rest of the world moves ahead.

The philosophical difference runs deep. As Professor Yoo explained, Europe tends toward “ex-ante” regulation (preventing potential problems before they happen), while the U.S. favors “ex-post” approaches that address real harms after they emerge. For simple, stable technologies, preventive rules can work. But AI evolves rapidly and behaves probabilistically. Drawing hard lines too early risks outlawing useful innovations while missing entirely new forms of risk.

I found myself invoking what Jessica called “regulatory humility.” No one, not even the world’s top AI experts, can predict exactly where this technology heads in five years. Expecting Congress to freeze that uncertainty into legislation asks the impossible. Better to enforce existing laws against fraud and discrimination as problems arise, crafting targeted fixes for clearly demonstrated harms.

When Acquisitions Actually Help Innovation

The mood in the room shifted when we reached mergers and acquisitions. I could sense some skepticism about defending Big Tech’s acquisition appetite, so I decided to tell them about a company I had worked at early in my career, NeXT Computer, and how over the years I saw the benefits of M&A for small companies and ultimately for consumers.

Back in the 1990s, NeXT was building innovative computer platforms but struggling to find market traction and running low on funding. When Apple (itself struggling at the time) acquired NeXT, many people might have seen it as another example of a big company swallowing a smaller competitor.

But, that acquisition became the foundation for Mac OS X, iOS, and basically all of Apple’s modern software architecture. And of course, it brought Steve Jobs back to Apple, which revitalized the entire company. The world might never have seen the iPhone or iPad if NeXT hadn’t found that lifeline through acquisition.

“If NeXT had just withered away or never found a buyer,” I told the room, “the technology you’re probably holding in your hand right now might not exist.”

Jessica expanded on this with the bigger picture. In the U.S., we’ve built what she called a “virtuous cycle” of innovation that depends partly on the M&A pathway. Venture capitalists invest in startups knowing there’s a potential big payoff, often through acquisition by a company that can scale the innovation globally.

The math is simple: if you pitch investors on your startup, you must explain your exit strategy. No rational investor funds a company that says “we’ll never be acquired and never go public.” With IPOs becoming harder and more expensive after regulations like Sarbanes-Oxley, acquisition often becomes the logical outcome.

Many founders love building new products but have no interest in scaling companies to thousands of employees. Large firms excel at scaling but need fresh ideas. Acquisitions marry these strengths. The innovative team gets to cash out and often start another venture, while the larger company gets technology and talent to take global.

Professor Yoo cited research showing that 95% of vertical mergers (mergers in which a company buys a complementary business) were either pro-consumer or neutral, with only 5% potentially harmful. The default assumption that “big company buying smaller company equals bad” simply isn’t supported by evidence.

He also shared Angela Merkel’s line contrasting innovation mindsets: “In the U.S., everything that’s not forbidden is permitted. In Europe, everything that’s not permitted is forbidden.” That philosophy explains why America has built a much more dynamic tech ecosystem.

There were murmurs and heads nodding knowingly. There may have been skeptics in the room, but they were quiet.

We weren’t arguing every merger is good. Some deserve scrutiny. But blanket hostility toward acquisitions, or proposals to ban startup sales outright, would likely backfire by drying up venture funding and stranding innovations in companies without the resources to scale them globally.

How David Actually Needs Goliath

One of the more counterintuitive points we wanted to make was how small AI companies actually depend on the big platforms. The “Big Tech versus startups” framing misses the symbiosis that makes modern innovation possible.

When our moderator asked whether a hypothetical five-person fintech startup could “build its own full tech stack” without using major cloud providers, I laughed. “Anything is possible, but I can’t imagine the business model that makes it reasonable.”

The costs would be astronomical. Running modern AI at scale requires massive computing power, data storage, security infrastructure, and compliance systems. A startup would burn through tens of millions of dollars just setting up what Amazon Web Services or Microsoft Azure provides as a service for a few hundred dollars a month.

“Thanks to cloud platforms and open-source libraries,” I explained, “I could whip out my laptop right now and in minutes build an application that would blow you away. Building an application and getting it to market is much faster because of the infrastructure that exists now.”
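As a hedged illustration of how little of that stack a builder actually has to touch, here is roughly what such a quick demo looks like. It assumes the openai Python package and an OPENAI_API_KEY in the environment; the model name, system prompt, and pizza-shop framing are illustrative stand-ins, not a recommendation of any particular vendor.

```python
# A sketch of the "minutes, not months" point: a tiny command-line assistant
# built on a hosted foundation model. Assumes the `openai` package (v1+) and
# an OPENAI_API_KEY in the environment; the model name is illustrative.

from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                      # any hosted chat model works
        messages=[
            {"role": "system",
             "content": "You are a concise assistant for a small pizza shop."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("A customer wants pepperoni and pineapple delivered at 1 a.m. "
                 "What should I confirm before taking the order?"))
```

Everything hard in that example, including the model, the GPUs it runs on, the hosting, and the security, belongs to someone else’s layer of the stack. The startup’s job is the twenty lines on top.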

Andrei reinforced this with his startup’s story. NLX landed major corporate clients as a small, unfunded startup by leveraging cloud platforms. “Should you build everything yourself? Probably not,” he said. Without cloud APIs and hosting, reaching enterprise scale would have taken years longer and vastly more capital.

Professor Yoo put this in economic terms every business faces: what to build in-house versus outsource. He gave a Capitol Hill example that got some chuckles: “Does the Senate hire its own cleaning staff and HVAC engineers, or contract that out?” Usually, outsourcing wins because specialists do it better and cheaper.

The same logic applies to a startup deciding whether to build its own data center or use Amazon’s. It’s usually a no-brainer to use the established platform. Big providers like AWS aren’t killing small companies; they’re enabling them.

For regulators, this interconnection matters. Policies that punish big “gatekeeper” companies for offering integrated services might hurt the small developers who depend on those services. There’s a nuanced line between ensuring fair access and breaking the platforms that make innovation accessible to everyone.

The Thousand-Bill Nightmare

About 45 minutes into our session, the conversation turned to what Jessica called “the patchwork problem.” A staffer asked about the flood of state AI legislation, and I could see genuine concern on faces around the room.

“Over 1,000 AI bills have been introduced across various states,” Jessica said. Nearly 700 were introduced in 2024 alone, and the pace is accelerating in 2025. People were doing mental math on what that meant for companies trying to operate nationally.

I walked through the brutal economics. If 15 states each have different AI requirements, a small company faces two impossible choices: withdraw from some markets (don’t offer services in states with onerous rules) or comply with all of them by defaulting to the strictest requirements everywhere. The latter approach means treating every customer as if they live in the most regulated state, which drives up costs and can dumb down the service. Even that might not work if state laws actually conflict rather than merely stack up; a company may end up maintaining what amounts to 15 different versions of its software.

“The reality is many startups will probably just say, ‘we can’t expand until this stabilizes,'” I told them. “That uncertainty kills growth.”

Jessica drove home the irony: this patchwork actually benefits the incumbent giants Congress claims to worry about. Large firms can afford 50-state legal compliance; startups cannot. A thicket of conflicting state rules would “accidentally favor the incumbents.”

The solution seemed obvious: federal action to create consistent national policy. Yet Congress has struggled to act. A proposed 10-year moratorium on state AI laws was swiftly rejected by the Senate in a 99-1 vote after bipartisan backlash from governors and attorneys general. The result? States continue racing ahead with their own regulations.

In 2025 alone, states like Arkansas, Kentucky, Maryland, Montana, Utah, and West Virginia have passed new AI statutes, adding to the growing patchwork. California’s SB 1001 requires disclosure of AI chatbots, Colorado mandates algorithmic impact assessments, and Illinois expanded its biometric privacy law to cover AI systems. Each state takes a different approach, creating a compliance nightmare for companies operating nationally. No major new federal preemption or harmonizing action has emerged, so for now, companies must continue to navigate a rapidly expanding and fragmented state regulatory landscape.

Even OpenAI’s Sam Altman has testified that a patchwork of inconsistent rules would “significantly impair” AI progress in America (https://natlawreview.com/article/us-house-representatives-advance-unprecedented-10-year-moratorium-state-ai-laws#:~:text=Industry%20Perspective). In the room, the case for federal preemption seemed to be gaining momentum, but despite industry concerns it appears unlikely in the near term, leaving companies to navigate an increasingly complex state regulatory landscape.

But I wanted to clarify something that seemed to confuse some people in the room. “Startups actually want clear rules to follow,” I said. “The narrative that tech entrepreneurs just want total freedom isn’t accurate. Every founder I mentor wants to know the guardrails. What kills us is uncertainty and inconsistency.”

Give us light-touch federal standards and we’ll comply gladly and get back to innovating. But 50 different moving targets? That’s a recipe for startups either fleeing to friendlier countries or never getting started. 

The Open Source Complexity

During the Q&A period, someone raised a question about open-source AI that revealed how even well-intentioned policies can miss the mark. The EU’s AI Act plans to exempt open-source models from many regulations, assuming they’re inherently safer because the code is publicly available.

“Is open-source really safer?” the questioner asked.

Professor Yoo and I exchanged glances. This assumption needed unpacking.

“Open-source has huge benefits,” I acknowledged. “It fosters collaboration, democratizes access to AI tools, and spurs innovation outside big corporate labs.” But transparency doesn’t automatically equal safety.

I explained the spectrum of what “open source” actually means in AI. Some groups release model weights (the trained parameters) but not the training data or training process. Meta’s Llama 2 model was hailed as “open source” because its neural weights are downloadable, but the training data remains largely mysterious.

“Recently a model that was labeled open-source was tested,” I said, referring to analysis of DeepSeek. “Despite being called ‘open,’ it failed every security test it was put through” (https://blogs.cisco.com/security/evaluating-security-risk-in-deepseek-and-other-frontier-reasoning-models). “Yet under certain policy proposals, it might get regulatory exemptions just because of that ‘open’ label.”

The broader lesson: regulators shouldn’t use crude open versus closed distinctions as proxies for safety. Closed models like GPT-4 lack transparency, which creates accountability challenges. But open models can be taken by anyone, including bad actors, and deployed without monitoring. When vulnerabilities exist in open models, they’re visible to attackers until fixed.

Context and use matter more than licensing labels. An open-source model in a critical medical device should meet the same safety standards as a proprietary model in that device. A hobbyist playing with a closed-source model poses different risks than a company deploying it at massive scale.

The Human Element

As our session wrapped up, the conversation landed on what might be the most important factor: people. A congressional staffer asked about jobs, skills, and the ever-present problem of AI bias.

“AI is a tool,” I said, “and like any powerful tool, outcomes depend on the user.” Right now we have a situation where AI has become extremely easy to use (anyone can sign up for free chatbots online), but society hasn’t equipped people with the knowledge to use it well.

I told them about training corporate staff on “prompt engineering.” People blame the AI when they get wrong answers, but usually they asked the question poorly. Garbage in, garbage out still applies.
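A quick, purely illustrative example of what I mean by asking the question well. The prompts below are made up for this essay and not tied to any particular model or vendor; the contrast is the point.

```python
# "Garbage in, garbage out" in practice: the same task, asked two ways.
# Illustrative prompt strings only; no model call is made here.

vague_prompt = "Write something about our sales."

specific_prompt = """You are a financial analyst.
Using the Q2 sales figures pasted below, write a 3-bullet summary for a
non-technical executive: one bullet on the overall trend, one on the best-
and worst-performing regions, and one recommended action. Keep it under
80 words.

<paste Q2 sales table here>"""

# The first prompt leaves the model guessing about audience, data, format,
# and length. The second supplies a role, the source data, a structure, and
# a length limit, so the output is far more likely to be what was wanted.
print(vague_prompt)
print(specific_prompt)
```

The model is the same in both cases; the difference in output quality comes almost entirely from the question.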

Education is quickly becoming the critical infrastructure for the AI era. We need to build AI literacy from the ground up, long before college. Critical thinking, data literacy, and even the basics of prompt crafting are now essential skills, at least for this generation of large language models.

The bias conversation always gets tangled up in definitions. To most people, bias means prejudice. In AI, bias is also the statistical lean that lets a model learn from data. If you remove all bias, you are left with a random number generator, not an AI. We absolutely do not want systems with racial or gender bias, but we do need AI that is biased toward stopping for kids in a crosswalk. That gap in understanding creates real policy headaches. If a law says no bias, it could accidentally ban all functional AI.

I am never sure if this point lands. Terminology trips people up more than the technology itself. I hope we cleared it up. For example, we do not want AI that favors men in hiring, but we do want AI automobiles that favor stopping over running into pedestrians.

The real solution is to enforce existing anti-discrimination laws in the AI context. Companies using AI for hiring or lending should audit their systems regularly. We do not need a raft of new AI bias laws as much as we need to apply civil rights laws to these new tools.

Looking ahead, AI will raise the skill bar across industries. Rather than eliminating human work, it will shift us toward higher-order judgment, critical thinking, and the orchestration of complex workflows. Countries that invest in human capital and AI fluency will multiply their technological advantages. While we cannot predict with certainty what the world will look like in twenty years, we can plan for the next decade with a clear sense of the skills that will matter most.

Walking Out With Hope

As the session ended and people began gathering their papers, I felt something I hadn’t expected: genuine optimism. It came from the staffers who stayed afterward to ask follow-up questions, from the thoughtful nature of the discussion, and from the clear interest in getting policy right rather than just making political points.

Walking out into the humid D.C. afternoon, laptop bag once again slung over my shoulder, I reflected on what we’d accomplished. We’d bridged the knowledge gap between technologists and policymakers, at least for a few hours. We’d shown that smart regulation can coexist with innovation leadership. Most importantly, we’d demonstrated that democracy can handle complex technological challenges when people engage in good faith.

The path forward seemed clear: foster innovation through unified, light-touch rules; invest in education and R&D; enforce existing laws against fraud and discrimination even when AI is involved; and stay flexible as technology evolves. America has navigated transformative technologies before, from electricity to the internet. We can do it with AI too, maintaining both our global technology leadership and our commitment to individual freedom.

As an AI practitioner, I left Washington recommitted to being a bridge between these worlds. The story of AI is still being written, and sessions like this give me confidence we’ll write it wisely. Through dialogue, iteration, and a healthy respect for both innovation and human values, we can ensure the United States remains the place where the future is imagined and built, responsibly and for everyone’s benefit.

A QUESTION FOR YOU

Given that the US maintains its AI leadership through a delicate balance of light regulation, open acquisition pathways, and platform accessibility for startups, what specific actions is your organization taking today to ensure this ecosystem remains vibrant for the next generation of innovators, rather than inadvertently supporting policies that might protect your current position but ultimately weaken America’s competitive advantage?