Markets · Financial Times · Apr 28, 2026 · 1 min read
Goldman Sachs Restricts AI Access for Hong Kong Bankers

Goldman Sachs has restricted its Hong Kong bankers from accessing Anthropic's Claude AI models, citing unspecified concerns. This move underscores the financial sector's cautious approach to integrating AI, driven by data privacy and regulatory compliance.
Goldman Sachs has barred its bankers in Hong Kong from accessing AI models developed by Anthropic, including its Claude large language model. The policy, in effect for several weeks, marks a notable shift in the investment bank's internal technology usage and data security protocols in the region.
While the specific reasons for the restriction were not publicly detailed, such moves by major financial institutions typically stem from concerns over data privacy, intellectual property protection, and regulatory compliance. Generative AI tools offer potential efficiency gains, but they also introduce complex challenges around leakage of proprietary information and the accuracy and bias of AI-generated content.
This development underscores the broader cautious approach many financial sector firms are taking toward integrating nascent AI technologies, particularly in sensitive operational areas. The financial industry is heavily regulated, and the adoption of new technologies must align with stringent data governance frameworks and client confidentiality requirements. Goldman Sachs's decision in Hong Kong could reflect a localized regulatory interpretation or an internal risk assessment specific to the operational environment there.
The restriction is likely part of a global effort by financial institutions to establish clear internal guidelines and robust safeguards before widespread deployment of external AI tools. It highlights the ongoing tension between technological innovation and the paramount need for security and compliance in a highly competitive and regulated industry.
Analyst's Take
While this appears to be a localized IT policy, it could signal growing unease among global financial regulators about the secure and compliant use of third-party AI, potentially foreshadowing broader industry-wide directives or stricter data residency requirements for AI processing. The timing, which predates anticipated global AI regulations, suggests proactive internal risk management and could influence other major banks' AI adoption strategies in APAC.