
AI in the Financial System: How to Stay on the Right Side of SEC Scrutiny in 2026

February 25, 2026

AI, robo‑advice, and advanced analytics now sit inside the core of how many firms make money, manage risk, and communicate with investors, so regulators expect the same level of discipline and truthfulness around AI that they expect for any other key business process. The practical goal in 2026 is to keep using AI where it genuinely helps, but to be able to explain it clearly, supervise it, and document it so that investors are not misled and client interests are not inadvertently sidelined.

Where AI and Securities Regulation Intersect

Securities regulators are currently focused on a few recurring AI‑use patterns that are directly relevant to securities law.

  • AI engines that influence revenue and cost, such as systems that set prices, determine which customers to approve or decline, choose which prospects to target, or forecast demand, because these decisions flow straight into revenue, margins, and loss rates in the financials.
  • AI embedded in risk and security controls, such as tools that scan transactions for fraud, monitor for cyber threats, or flag operational anomalies, since failures here can lead to financial losses, regulatory issues, or data breaches that change the risk profile investors care about.
  • AI used in trading and portfolio decisions, including models that rank securities, generate trading signals, or power robo‑advisers that propose portfolios to retail clients, because those outputs shape how client money is allocated.
  • AI helping to produce investor‑facing content, including draft earnings scripts, investor presentations, website copy, and “AI‑powered” marketing, which shapes how investors understand the business, its edge, and its risk.

A fast‑moving example is AI’s role in financial reporting itself, as detailed in The Wall Street Journal article “AI Is Co-Writing Financial Reports. Here’s Why That Matters,” published on October 31, 2025. According to the article, ON Semiconductor’s Chief Financial Officer describes how the company now uses generative AI to draft large parts of its financial reports. Another public company, Hewlett Packard Enterprise, is piloting its own large‑language‑model platform (a kind of artificial intelligence system that’s trained on huge amounts of text so it can understand and generate human‑like language) to generate first drafts of financial statements for Securities and Exchange Commission (SEC) filings and to help its investor‑relations team anticipate analyst questions. These moves illustrate why AI in reporting is now a disclosure and governance issue: it affects how quickly information reaches markets and how much of the voice investors hear is human.

When AI strongly influences areas such as pricing, underwriting, risk controls, portfolio construction, or the drafting of filings and investor communications, regulators treat it as part of the firm’s core operations and expect leadership to understand what the system does, what can go wrong, and how those risks are controlled and communicated.

When AI Triggers SEC Disclosure Duties

The basic test for disclosure is investor‑focused, drawing on TSC Industries, Inc. v. Northway, Inc., 426 U.S. 438 (1976), a Supreme Court decision: an AI‑related fact is material if there is a substantial likelihood that a reasonable investor would view it as significantly altering the total mix of information in deciding whether to buy, sell, or hold the securities. In practice, AI usually crosses that line when it is tightly tied to performance, risk, or strategy. Recent recommendations from the SEC’s Investor Advisory Committee confirm that AI‑related disclosures should be evaluated under this existing materiality standard rather than a new, AI‑specific test (U.S. Securities and Exchange Commission, Investor Advisory Committee, “Disclosure of Artificial Intelligence’s Impact on Operations” (Mar. 6, 2025)).

AI is material when it drives business outcomes, such as an underwriting model that largely determines which loans are approved and at what rate, or a pricing model that adjusts prices across a product line and meaningfully shifts revenue and margins. AI is material when it sits at the center of risk controls, so a model failure could trigger large errors, compliance breaches, or cyber incidents with financial and reputational consequences. And AI is material when it is presented as a core differentiator, as when a firm brands itself as an “AI‑driven” asset manager or platform and investors are told that its edge rests on unique models and data.

Public companies are expected to capture these points in risk factors by explaining dependence on AI tools, data, and vendors, and how errors, bias, outages, regulatory shifts, or cyber issues involving AI could harm results or reputation. Business and governance disclosures should describe how AI fits into products and operations and who at the management and board level oversees AI strategy, risk, and vendors. And management’s discussion and analysis (MD&A), the section of annual and quarterly SEC reports where management explains results and trends for investors, should discuss how AI has affected, and is expected to affect, revenues, costs, and key performance trends, without promising certainty or guaranteed savings.

What AI‑Washing Looks Like in Practice

The SEC has identified “AI‑washing” as a clear target: using AI buzzwords to make a business sound more advanced or lower‑risk than it is. In 2024, enforcement actions against Delphia (USA) Inc. and Global Predictions Inc. targeted registered investment advisers, alleging misleading statements about their use of AI and machine learning in violation of the antifraud provisions of the Investment Advisers Act of 1940 (including 15 U.S.C. § 80b‑6(2) and § 80b‑6(4)) and the SEC’s Marketing Rule, codified at 17 C.F.R. § 275.206(4)‑1, which governs investment adviser advertising and generally prohibits untrue, unsubstantiated, or otherwise materially misleading promotional claims. By contrast, two 2025 matters involving public issuers proceeded under the federal securities laws’ general antifraud and disclosure framework (including, for example, Securities Act of 1933 § 17(a)(2), 15 U.S.C. § 77q(a)(2); Exchange Act of 1934 § 10(b), 15 U.S.C. § 78j(b), and Rule 10b‑5, 17 C.F.R. § 240.10b‑5; as well as reporting and disclosure‑controls requirements) rather than under the Advisers Act and its Marketing Rule: the Presto Automation settlement, in which a public company overstated how advanced its “AI” product was, and the parallel SEC and Department of Justice actions against Nate Inc.’s founder over a shopping app that claimed far more automation than it actually had. Across these matters, the government focused on AI‑related claims about capabilities and “edge” that were not supported by reality and were repeated in websites, public filings, demos, emails, and pitch decks.

The conduct regulators highlighted is easy to visualize. One type of issue is claiming a status or capability that does not exist, such as calling a firm “the first regulated AI financial advisor” or touting “expert AI‑driven forecasts” when actual processes are basic or not in place. Another is describing the investment process as built on proprietary AI engines and client‑data‑driven models when most portfolio decisions are still made through conventional methods, manual judgment, or simple screens. A third is implying fully autonomous, self‑learning systems where human judgment still dominates, creating the impression of a stronger, more scalable edge than the firm actually has.

Beginning in 2025, the DOJ joined the SEC in pursuing AI‑washing cases, bringing parallel civil and criminal actions against companies and founders accused of misrepresenting the capabilities of their AI systems. A notable example came in April 2025, when the DOJ and SEC jointly charged Albert Saniger, the founder and former CEO of Nate Inc., an e-commerce shopping-app startup, alleging he raised more than $42 million by falsely claiming that Nate’s shopping app relied on advanced AI, machine learning, and neural networks to automate online transactions. According to the government, the app’s core “AI‑powered” automation was largely a fiction: transactions were actually being completed manually by overseas contract workers, while the company conducted deceptive product demonstrations to conceal the lack of real AI functionality. Securities and Exchange Commission v. Saniger, No. 1:25‑cv‑02937 (S.D.N.Y. filed Apr. 9, 2025) and United States v. Saniger, No. 25‑cr‑00157 (S.D.N.Y. filed Apr. 9, 2025). This joint action marked a significant escalation in the federal government’s enforcement posture, signaling that misrepresentations about AI can now trigger both securities‑fraud liability and criminal exposure.

The risk here is straightforward because investors may give more weight, or be willing to pay higher fees, when they believe they are buying into genuinely advanced AI capabilities, and overstating those capabilities distorts those decisions. The SEC’s practical message is that firms can highlight AI, but must describe what it does in concrete, accurate terms, avoid embellishment, and support claims with documentation and testing.

Duties, conflicts, and supervising AI outputs

The legal duties that apply to advice do not change just because AI is in the loop. Under the Investment Advisers Act of 1940, investment advisers owe a fiduciary duty to act in the client’s best interest, align advice with the client’s goals and constraints, manage conflicts fairly, and disclose conflicts so clients can give informed consent. Regulation Best Interest, which applies to broker‑dealers when they recommend securities to retail customers, requires them to act in the customer’s best interest and not put their own interests ahead of the customer’s, backed by specific obligations around care, conflicts, disclosure, and compliance.

Regulators apply these frameworks to AI and robo‑advice by focusing on what the tools actually do. Trading signals and portfolio recommendations generated by models must still fit the client’s risk tolerance, time horizon, and objectives. Conflicts become a concern when models are designed or trained in ways that favor products that benefit the firm: higher‑fee products (for example, mutual funds with higher expense ratios), proprietary products (products that the firm itself sponsors or manages, as opposed to third‑party products), or more complex products (such as structured notes or leveraged and inverse ETFs that are harder for typical clients to understand). In those situations the system quietly steers clients toward options that are worse for them, and AI makes it easier to embed such conflicts at scale.

Recent AI‑washing cases also show that AI‑generated outputs, such as recommendations, scores, classifications, and investor‑facing content, are part of what firms must supervise through their compliance programs. In practice, that includes testing AI‑generated recommendations and strategies against client profiles and investment guidelines to see whether outputs are consistent with stated approaches and risk limits. It also means reviewing AI‑assisted content, such as websites, emails, social posts, robo‑adviser interfaces, dashboards, and chatbots, through the same approval process as human‑drafted materials to prevent overstatements or misleading performance claims.

Firms are expected to keep records of the models they use, version histories, parameter changes, test results, incidents, and fixes, so they can show regulators that they monitor and improve AI over time rather than adopting a “set and forget” approach. Stress‑testing models against extreme but plausible scenarios, such as sharp rate changes or liquidity squeezes, helps reveal how the AI behaves in unusual markets, while monitoring for model drift (gradual changes in performance as data patterns shift) ensures the tools do not imperceptibly degrade or change behavior in ways that increase risk.

Regulatory focus, evolving rules, and practical next steps

Although the SEC had previously proposed rules targeting conflicts of interest arising from the use of predictive data analytics in broker‑dealer and adviser interactions, that proposal was formally withdrawn on June 12, 2025, as part of the Commission’s decision to retract fourteen outstanding rulemakings issued under the prior administration. Any future action in this area would require a new proposal and fresh comment process. The SEC’s examination agenda for 2026 confirms that AI remains a standing part of how the agency evaluates conduct, risk, and disclosure obligations, even as prior AI‑related rule proposals have been withdrawn. The SEC’s Division of Examinations (the office that conducts routine and risk‑based exams of broker‑dealers, advisers, and funds) in its 2026 Examination Priorities states that it “remains focused on registrants’ use of certain products and services, such as automated investment tools, AI technologies, and trading algorithms or platforms,” and that, with respect to AI, it will “review for accuracy registrant representations regarding their AI capabilities” and assess whether firms have adequate policies and procedures to supervise AI used for “fraud prevention and detection, back‑office operations, anti‑money laundering (AML), and trading functions,” as well as to manage information‑security risks associated with “artificial intelligence (AI) and polymorphic malware attacks.”

The SEC’s Investor Advisory Committee has recommended that the Commission “issue further AI related disclosure guidance applicable to issuers built‑upon a materiality‑informed framework,” under which “issuers’ materiality analysis should include factors such as the impact of AI on their growth, financial results, barriers to adoption, possible adverse developments and evolving competitive and regulatory landscapes and similar risks,” and “if material, report separately on how they are deploying AI and the effects of AI deployment on (a) internal business operations and (b) consumer facing matters” (SEC Investor Advisory Committee, “Recommendation Regarding the Disclosure of Artificial Intelligence’s Impact on Operations,” approved Dec. 4, 2025). For broker‑dealers, the Financial Industry Regulatory Authority (FINRA), the self‑regulatory organization that writes and enforces many day‑to‑day conduct rules for brokerage firms, has emphasized in Regulatory Notice 24‑09 that its rules are “technology neutral,” so firms using artificial intelligence and other advanced tools remain fully responsible for compliance with applicable supervision, suitability or Regulation Best Interest, communications, and books‑and‑records obligations, and must update their written supervisory procedures to address the use of AI rather than relying on vendors as a shield (FINRA Regulatory Notice 24‑09, “Sales Practice Obligations Related to Artificial Intelligence,” June 26, 2024).

In the early 2000s, large Wall Street investment banks often used their in‑house stock research analysts to help win and keep investment‑banking business. The same firms that were being paid large fees to underwrite stock offerings and advise on mergers and other deals for a company were also publishing supposedly “independent” research reports about that same company’s stock. Senior investment‑banking executives pressured analysts to keep issuing optimistic reports and “buy” ratings, even when the analysts’ real views were more negative, because overly positive research reports helped attract and retain that underwriting and advisory work. The 2003 Global Research Analyst Settlement was a one‑time agreement between regulators and several major banks that tried to break this pattern by requiring stronger “firewalls” between research and banking, changing how analysts could be paid and evaluated, and imposing significant monetary penalties to compensate investors and help fund independent research; but from the start it was a temporary, tailored fix aimed at a defined set of firms and abuses.

Over time, instead of relying on that one settlement forever, the SEC and the brokerage industry’s self-regulator replaced it with standing rules that apply to all firms and that spell out how research has to be produced, how analysts can be paid and supervised, and how conflicts with banking and trading must be managed on an ongoing basis, in a way that can adjust as markets and technology change. The reason to bring that history into the AI discussion is to show a pattern in how regulation matures: regulators usually start with targeted, case‑specific interventions when a new problem appears, and in AI that has meant early cases against particular advisers and companies whose marketing overstated their AI capabilities, but the trajectory is toward broader, technology‑neutral conduct and disclosure standards. AI is partway through that transition now, with both high‑profile, one‑off enforcement actions and emerging proposals and examination priorities that aim to build stable expectations for how firms can use AI and how they must describe it as models and business models evolve.

For 2026, the practical message is to treat AI like any other core business system that affects money, risk, or investors, and to manage it in a structured, disclosure‑aware way. Firms should maintain a current inventory of AI tools that influence financial results, risk controls, or investor communications and clearly document what each tool does, how it is used, and who is responsible for it. They should make sure SEC filings and other public materials describe those tools in plain, accurate terms, using existing items such as business, risk factors, and the section where management explains results and trends. They should also avoid “AI‑washing” by not suggesting that their technologies are more autonomous, scalable, or commercially mature than they really are, and by clearly defining what they mean by AI and how it affects results, risks, and prospects. AI needs to be built into the same governance, risk, and compliance framework that applies to other important systems: senior managers and the board should know which AI tools the firm is using and who is accountable for them; the firm should regularly test those tools to see how they behave and whether they are making mistakes, drifting over time, embedding bias, or creating conflicts of interest; and the firm should keep track of which models and versions are in use, along with evidence of the tests it ran, the problems it found, and how those problems were fixed. Finally, firms should train both client‑facing and technical staff to describe AI in accurate terms and to recognize when AI‑driven outputs raise suitability, conflict, or disclosure issues, so that AI strengthens the business and supports clear, consistent SEC reporting instead of becoming a hidden regulatory and litigation liability.

If AI plays a meaningful role in your revenue, risk controls, or investor communications, now is the time to pressure test how it is governed, described, and disclosed. Our team works with public companies, advisers, and financial platforms to help align AI use with existing securities, disclosure, and fiduciary obligations. To discuss further, please contact KJK Corporate & Securities co-chair Christopher Hubbert (CJH@kjk.com; 216.736.7215).