The rollout of the Artificial Intelligence Basic Act in January 2026 carries a grace period to address expected “readiness gaps”
When its national AI framework takes effect on January 22, 2026, South Korea will become the first country in the world to implement comprehensive AI legislation, months ahead of the European Union’s AI Act, which comes into full force in August 2026.
The move positions Seoul as a regulatory trailblazer, but has sparked concern among the country’s fast‑growing startup ecosystem over limited preparation time and unclear compliance standards.
The Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness, enacted in January 2025, introduced a set of national requirements governing AI safety, transparency, and accountability, mandating:
- the creation of a government‑appointed AI committee
- the adoption of a three‑year policy blueprint
- the imposition of disclosure and notification obligations for certain high‑risk systems
- a tiered, risk‑based approach similar to the EU model, with the most stringent rules reserved for high‑impact AI deployed in sectors such as healthcare, education, and public administration
In a December 2025 survey of 101 domestic AI firms conducted by the Startup Alliance, 98% of respondents said they were not yet ready to comply. Nearly half indicated they were unaware of the law’s specific provisions, while another 48.5% said they were familiar with the law but insufficiently prepared. The chief concerns cited were ambiguity around the definition of “generative AI,” the scope of notification requirements, and the criteria for high‑impact classification.
An official from the Korea Internet Corporations Association told local media that many startups would face “overwhelming” compliance challenges because the final enforcement decree is not expected until shortly before the January rollout.
Industry observers have warned that some smaller South Korean firms may temporarily suspend or relocate operations, with Japan emerging as a preferred alternative thanks to its lighter, voluntary governance model.
One of the most contentious provisions mandates watermarking of AI‑generated content to curb misinformation and deepfakes. Although the measure is intended to promote transparency, critics argue that labeling outputs as “AI‑generated” could stigmatize legitimate products and dampen consumer trust.
The nation’s Ministry of Science and ICT has responded by pledging a one‑year grace period before fines are imposed. Science Minister Bae Kyunghoon has described the enforcement decree as a “cornerstone” for transforming South Korea into a “top‑three global AI powerhouse.”