A BILL TO BE ENTITLED

AN ACT

relating to the regulation and reporting on the use of artificial intelligence systems by certain business entities and state agencies; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this chapter:

(1)  "Algorithmic discrimination" means any condition in which an artificial intelligence system, when deployed, creates unlawful discrimination against a protected classification in violation of the laws of this state or federal law.

(A)  "Algorithmic discrimination" does not include the offer, license, or use of a high-risk artificial intelligence system by a developer or deployer for the sole purpose of the developer's or deployer's self-testing, for a non-deployed purpose, to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law.

(2)  "Artificial intelligence system" means the use of machine learning and related technologies that use data to train statistical models for the purpose of enabling computer systems to perform tasks normally associated with human intelligence or perception, such as computer vision, speech or natural language processing, and content generation.

(3)  "Biometric identifier" means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(4)  "Council" means the Artificial Intelligence Council established under Chapter 553.

(5)  "Consequential decision" means any decision that has a material, legal, or similarly significant effect on a consumer's access to, cost of, or terms or conditions of:

(A)  a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision;

(B)  education enrollment or an education opportunity;

(C)  employment or an employment opportunity;

(D)  a financial service;

(E)  an essential government service;

(F)  residential utility services;

(G)  a health-care service or treatment;

(H)  housing;

(I)  insurance;

(J)  a legal service;

(K)  a transportation service;

(L)  constitutionally protected services or products; or

(M)  elections or voting process.

(6)  "Consumer" means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(7)  "Deploy" means to put into effect or commercialize.

(8)  "Deployer" means a person doing business in this state that deploys a high-risk artificial intelligence system.

(9)  "Developer" means a person doing business in this state that develops a high-risk artificial intelligence system or substantially or intentionally modifies an artificial intelligence system.

(10)  "Digital service" means a website, an application, a program, or software that collects or processes personal identifying information with Internet connectivity.

(11)  "Digital service provider" means a person who:

(A)  owns or operates a digital service;

(B)  determines the purpose of collecting and processing the personal identifying information of users of the digital service; and

(C)  determines the means used to collect and process the personal identifying information of users of the digital service.

(12)  "Distributor" means a person, other than the developer, that makes an artificial intelligence system available in the market for a commercial purpose.

(13)  "Generative artificial intelligence" means artificial intelligence models that can emulate the structure and characteristics of input data in order to generate derived synthetic content.  This can include images, videos, audio, text, and other digital content.

(14)  "High-risk artificial intelligence system" means any artificial intelligence system that is a substantial factor in making a consequential decision.  The term does not include:

(A)  an artificial intelligence system if the artificial intelligence system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review;

(B)  an artificial intelligence system that violates a provision of Subchapter B; or

(C)  the following technologies, unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision:

(i)  anti-malware;

(ii)  anti-virus;

(iii)  calculators;

(iv)  cybersecurity;

(v)  databases;

(vi)  data storage;

(vii)  firewall;

(viii)  fraud detection systems;

(ix)  internet domain registration;

(x)  internet website loading;

(xi)  networking;

(xii)  operational technology;

(xiii)  spam- and robocall-filtering;

(xiv)  spell-checking;

(xv)  spreadsheets;

(xvi)  web caching;

(xvii)  web scraping;

(xviii)  web hosting or any similar technology; or

(xix)  any technology that solely communicates in natural language for the sole purpose of providing users with information, making referrals or recommendations relating to customer service, and answering questions and is subject to an acceptable use policy that prohibits generating content that is discriminatory or harmful, as long as the system does not violate any provision listed in Subchapter B.

(15)  "Open source artificial intelligence system" means an artificial intelligence system that:

(A)  can be used or modified for any purpose without securing permission from the owner or creator of such an artificial intelligence system;

(B)  can be shared for any use with or without modifications; and

(C)  includes information about the data used to train such system that is sufficiently detailed such that a person skilled in artificial intelligence could create a substantially equivalent system when the following are made available freely or through a non-restrictive license:

(i)  the same or similar data;

(ii)  the source code used to train and run such system; and

(iii)  the model weights and parameters of such system.

(16)  "Operational technology" means hardware and software that detects or causes a change through the direct monitoring or control of physical devices, processes, and events in the enterprise.

(17)  "Personal data" has the meaning assigned to it by Section 541.001, Business & Commerce Code.

(18)  "Risk" means the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event.

(19)  "Sensitive personal attribute" means race, political opinions, religious or philosophical beliefs, ethnic orientation, mental health diagnosis, or sex.  The term does not include conduct that would be classified as an offense under Chapter 21, Penal Code.

(20)  "Social media platform" has the meaning assigned by Section 120.001, Business & Commerce Code.

(21)  "Substantial factor" means a factor that is:

(A)  considered when making a consequential decision;

(B)  likely to alter the outcome of a consequential decision; and

(C)  weighed more heavily than any other factor contributing to the consequential decision.

(22)  "Intentional and substantial modification" or "substantial modification" means a deliberate change made to an artificial intelligence system that reasonably increases the risk of algorithmic discrimination.

Sec. 551.002.  APPLICABILITY OF CHAPTER.  This chapter applies only to a person that is not a small business as defined by the United States Small Business Administration, and:

(1)  conducts business, promotes, or advertises in this state or produces a product or service consumed by residents of this state; or

(2)  engages in the development, distribution, or deployment of a high-risk artificial intelligence system in this state.

Sec. 551.003.  DEVELOPER DUTIES.  (a)  A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system.

(b)  Prior to providing a high-risk artificial intelligence system to a deployer, a developer shall provide to the deployer, in writing, a High-Risk Report that consists of:

(1)  a statement describing how the high-risk artificial intelligence system should or should not be used;

(2)  any known limitations of the system that could lead to algorithmic discrimination, the metrics used to measure the system's performance, which shall include, at a minimum, metrics related to accuracy, explainability, transparency, reliability, and security set forth in the most recent version of the "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" published by the National Institute of Standards and Technology, and how the system performs under those metrics in its intended use contexts;

(3)  any known or reasonably foreseeable risks of algorithmic discrimination arising from its intended or likely use;

(4)  a high-level summary of the type of data used to program or train the high-risk artificial intelligence system;

(5)  the data governance measures used to cover the training datasets and their collection, and the measures used to examine the suitability of data sources and prevent unlawful discriminatory biases; and

(6)  appropriate principles, processes, and personnel for the deployers' risk management policy.

(c)  If a high-risk artificial intelligence system is intentionally or substantially modified after a developer provides it to a deployer, a developer shall make necessary information in subsection (b) available to deployers within 30 days of the modification.

(d)  If a developer believes, or has reason to believe, that it deployed a high-risk artificial intelligence system that does not comply with a requirement of this chapter, the developer shall immediately take the necessary corrective actions to bring that system into compliance, including by withdrawing it, disabling it, and recalling it, as appropriate.  Where applicable, the developer shall inform the distributors or deployers of the high-risk artificial intelligence system concerned.

(e)  Where the high-risk artificial intelligence system presents risks of algorithmic discrimination, unlawful use or disclosure of personal data, or deceptive manipulation or coercion of human behavior and the developer knows or should reasonably know of that risk, it shall immediately investigate the causes, in collaboration with the deployer, where applicable, and inform the attorney general in writing of the nature of the non-compliance and of any relevant corrective action taken.

(f)  Developers shall keep detailed records of any generative artificial intelligence training data used to develop a generative artificial intelligence system or service, consistent with the suggested actions under GV-1.2-007 of the "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" by the National Institute of Standards and Technology, or any subsequent versions thereof.

Sec. 551.004.  DISTRIBUTOR DUTIES.  A distributor of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.  If a distributor of a high-risk artificial intelligence system knows or has reason to know that a high-risk artificial intelligence system is not in compliance with any requirement in this chapter, it shall immediately withdraw, disable, or recall, as appropriate, the high-risk artificial intelligence system from the market until the system has been brought into compliance with the requirements of this chapter.  The distributor shall inform the developers of the high-risk artificial intelligence system concerned and, where applicable, the deployers.

Sec. 551.005.  DEPLOYER DUTIES.  A deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.  If a deployer of a high-risk artificial intelligence system knows or has reason to know that a high-risk artificial intelligence system is not in compliance with any requirement in this chapter, it shall immediately suspend the use of the high-risk artificial intelligence system until the system has been brought into compliance with the requirements of this chapter.  The deployer shall inform the developers of the high-risk artificial intelligence system concerned and, where applicable, the distributors.

Sec. 551.006.  IMPACT ASSESSMENTS.  (a)  A deployer that deploys a high-risk artificial intelligence system shall complete an impact assessment for the high-risk artificial intelligence system.  A deployer, or a third party contracted by the deployer for such purposes, shall complete an impact assessment annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.  An impact assessment must include, at a minimum, and to the extent reasonably known by or available to the deployer:

(1)  a statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;

(2)  an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;

(3)  a description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;

(4)  if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;

(5)  any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;

(6)  a description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system will be used;

(7)  a description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system; and

(8)  a description of cybersecurity measures and threat modeling conducted on the system.

(b)  Following an intentional and substantial modification to a high-risk artificial intelligence system, a deployer must disclose the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.

(c)  A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed by a deployer.

(d)  A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.

(e)  If a deployer, or a third party contracted by the deployer, completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this section if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this section.

(f)  A deployer may redact any trade secrets as defined by Section 541.001(33), Business & Commerce Code, or information protected from disclosure by state or federal law.

(g)  Except as provided in subsection (e) of this section, a developer that makes a high-risk artificial intelligence system available to a deployer shall make available to the deployer the documentation and information necessary for a deployer to complete an impact assessment pursuant to this section.

(h)  A developer that also serves as a deployer for a high-risk artificial intelligence system is not required to generate and store an impact assessment unless the high-risk artificial intelligence system is provided to an unaffiliated deployer.

Sec. 551.007.  DISCLOSURE OF A HIGH-RISK ARTIFICIAL INTELLIGENCE SYSTEM TO CONSUMERS.  (a)  A deployer or developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available a high-risk artificial intelligence system that is intended to interact with consumers shall disclose to each consumer, before or at the time of interaction:

(1)  that the consumer is interacting with an artificial intelligence system;

(2)  the purpose of the system;

(3)  that the system may or will make a consequential decision affecting the consumer;

(4)  the nature of any consequential decision in which the system is or may be a substantial factor;

(5)  the factors to be used in making any consequential decisions;

(6)  contact information of the deployer;

(7)  a description of:

(A)  any human components of the system;

(B)  any automated components of the system; and

(C)  how human and automated components are used to inform a consequential decision; and

(8)  a declaration of the consumer's rights under Section 551.108.

(b)  Disclosure is required under subsection (a) of this section regardless of whether it would be obvious to a reasonable person that the person is interacting with an artificial intelligence system.

(c)  All disclosures under subsection (a) shall be clear and conspicuous and written in plain language, and avoid the use of a dark pattern as defined by Section 541.001, Business & Commerce Code.

(d)  All disclosures under subsection (a) may be linked to a separate webpage of the developer or deployer.

(e)  Any requirement in this section that may conflict with state or federal law may be exempt.

            |  | Sec. 551.008.  RISK IDENTIFICATION AND MANAGEMENT POLICY. | 
         
            |  | (a)  A developer or deployer of a high-risk artificial intelligence | 
         
            |  | system shall, prior to deployment, assess potential risks of | 
         
            |  | algorithmic discrimination and implement a risk management policy | 
         
            |  | to govern the development or deployment of the high-risk artificial | 
         
            |  | intelligence system.  The risk management policy shall: | 
         
            |  | (1)  specify and incorporate the principles and | 
         
            |  | processes that the developer or deployer uses to identify, | 
         
            |  | document, and mitigate, in the development or deployment of a | 
         
            |  | high-risk artificial intelligence system: | 
         
            |  | (A)  known or reasonably foreseeable risks of | 
         
            |  | algorithmic discrimination; and | 
         
            |  | (B)  prohibited uses and unacceptable risks under | 
         
            |  | Subchapter B; and | 
         
            |  | (2)  be reasonable in size, scope, and breadth, | 
         
            |  | considering: | 
         
            |  | (A)  guidance and standards set forth in the most | 
         
            |  | recent version of the "Artificial Intelligence Risk Management | 
         
            |  | Framework: Generative Artificial Intelligence Profile" published | 
         
            |  | by the National Institute of Standards and Technology; | 
         
            |  | (B)  any existing risk management guidance, | 
         
            |  | standards or framework applicable to artificial intelligence | 
         
            |  | systems designated by the Banking Commissioner or Insurance | 
         
            |  | Commissioner, if the developer or deployer is regulated by the | 
         
            |  | Department of Banking or Department of Insurance; | 
         
            |  | (C)  the size and complexity of the developer or | 
         
            |  | deployer; | 
         
            |  | (D)  the nature, scope, and intended use of the | 
         
            |  | high-risk artificial intelligence systems developed or deployed; | 
         
            |  | and | 
         
            |  | (E)  the sensitivity and volume of personal data | 
         
            |  | processed in connection with the high-risk artificial intelligence | 
         
            |  | systems. | 
         
            |  | (b)  A risk management policy implemented pursuant to this | 
         
            |  | section may apply to more than one high-risk artificial | 
         
            |  | intelligence system developed or deployed, so long as the developer | 
         
            |  | or deployer complies with all of the foregoing requirements and | 
         
            |  | considerations in adopting and implementing the risk management | 
         
            |  | policy with respect to each high-risk artificial intelligence | 
         
            |  | system covered by the policy. | 
         
            |  | (c)  A developer or deployer may redact or omit any trade | 
         
            |  | secrets as defined by Section 541.001(33), Business & Commerce Code, | 
         
            |  | or information protected from disclosure by state or federal law. | 
         
            |  | Sec. 551.009.  RELATIONSHIPS BETWEEN ARTIFICIAL | 
         
            |  | INTELLIGENCE PARTIES.  Any distributor or deployer shall be | 
         
            |  | considered to be a developer of a high-risk artificial intelligence | 
         
            |  | system for the purposes of this chapter and shall be subject to the | 
         
            |  | obligations and duties of a developer under this chapter in any of | 
         
            |  | the following circumstances: | 
         
            |  | (1)  they put their name or trademark on a high-risk | 
         
            |  | artificial intelligence system already placed in the market or put | 
         
            |  | into service; | 
         
            |  | (2)  they intentionally and substantially modify a | 
         
            |  | high-risk artificial intelligence system that has already been | 
         
            |  | placed in the market or has already been put into service in such a | 
         
            |  | way that it remains a high-risk artificial intelligence system | 
         
            |  | under this chapter; or | 
         
            |  | (3)  they modify the intended purpose of an artificial | 
         
            |  | intelligence system which has not previously been classified as | 
         
            |  | high-risk and has already been placed in the market or put into | 
         
            |  | service in such a way that the artificial intelligence system | 
         
            |  | concerned becomes a high-risk artificial intelligence system in | 
         
            |  | accordance with this chapter. | 
         
            |  | Sec. 551.010.  DIGITAL SERVICE PROVIDER AND SOCIAL MEDIA | 
         
            |  | PLATFORM DUTIES REGARDING ARTIFICIAL INTELLIGENCE SYSTEMS.  A | 
         
            |  | digital service provider as defined by Section 509.001(2), Business & | 
         
            |  | Commerce Code, or a social media platform as defined by Section | 
         
            |  | 120.001(1), Business & Commerce Code, shall require advertisers on | 
         
            |  | the service or platform to agree to terms preventing the deployment | 
         
            |  | of a high-risk artificial intelligence system on the service or | 
         
            |  | platform that could expose the users of the service or platform to | 
         
            |  | algorithmic discrimination or prohibited uses under Subchapter B. | 
         
            |  | Sec. 551.011.  REPORTING REQUIREMENTS.  (a)  A deployer must | 
         
            |  | notify, in writing, the council, the attorney general, or the | 
         
            |  | director of the appropriate state agency that regulates the | 
         
            |  | deployer's industry, and affected consumers as soon as practicable | 
         
            |  | after the date on which the deployer discovers or is made aware that | 
         
            |  | a deployed high-risk artificial intelligence system has caused | 
         
            |  | algorithmic discrimination of an individual or group of | 
         
            |  | individuals. | 
         
            |  | (b)  If a developer discovers or is made aware that a | 
         
            |  | deployed high-risk artificial intelligence system is using inputs | 
         
            |  | or providing outputs that constitute a violation of Subchapter B, | 
         
            |  | the developer must cease operation of the offending system as soon as | 
         
            |  | technically feasible and provide notice to the council and the | 
         
            |  | attorney general as soon as practicable and not later than the 10th | 
         
            |  | day after the date on which the developer discovers or is made aware | 
         
            |  | of the unacceptable risk. | 
         
            |  | Sec. 551.012.  SANDBOX PROGRAM EXCEPTION.  (a)  Excluding | 
         
            |  | violations of Subchapter B, this chapter does not apply to the | 
         
            |  | development of an artificial intelligence system that is used | 
         
            |  | exclusively for research, training, testing, or other | 
         
            |  | pre-deployment activities performed by active participants of the | 
         
            |  | sandbox program in compliance with Chapter 552. | 
         
            |  | SUBCHAPTER B.  PROHIBITED USES AND UNACCEPTABLE RISK | 
         
            |  | Sec. 551.051.  MANIPULATION OF HUMAN BEHAVIOR TO CIRCUMVENT | 
         
            |  | INFORMED DECISION-MAKING.  An artificial intelligence system shall | 
         
            |  | not be developed or deployed that uses subliminal techniques beyond | 
         
            |  | a person's consciousness, or purposefully manipulative or | 
         
            |  | deceptive techniques, with the objective or the effect of | 
         
            |  | materially distorting the behavior of a person or a group of persons | 
         
            |  | by appreciably impairing their ability to make an informed | 
         
            |  | decision, thereby causing a person to make a decision that the | 
         
            |  | person would not have otherwise made, in a manner that causes or is | 
         
            |  | likely to cause significant harm to that person or another person or | 
         
            |  | group of persons. | 
         
            |  | Sec. 551.052.  SOCIAL SCORING.  An artificial intelligence | 
         
            |  | system shall not be developed or deployed for the evaluation or | 
         
            |  | classification of natural persons or groups of natural persons | 
         
            |  | based on their social behavior or known, inferred, or predicted | 
         
            |  | personal characteristics with the intent to determine a social | 
         
            |  | score or similar categorical estimation or valuation of a person or | 
         
            |  | groups of persons. | 
         
            |  | Sec. 551.053.  CAPTURE OF BIOMETRIC IDENTIFIERS USING | 
         
            |  | ARTIFICIAL INTELLIGENCE.  An artificial intelligence system | 
         
            |  | developed with biometric identifiers of individuals obtained through | 

            |  | the targeted or untargeted gathering of images or other media from the | 
         
            |  | internet or any other publicly available source shall not be | 
         
            |  | deployed for the purpose of uniquely identifying a specific | 
         
            |  | individual. An individual is not considered to be informed of, or to | 
         
            |  | have provided consent for such purpose pursuant to Section 503.001, | 
         
            |  | Business and Commerce Code, based solely upon the existence on the | 
         
            |  | internet, or other publicly available source, of an image or other | 
         
            |  | media containing one or more biometric identifiers. | 
         
            |  | Sec. 551.054.  CATEGORIZATION BASED ON SENSITIVE | 
         
            |  | ATTRIBUTES.  An artificial intelligence system shall not be | 
         
            |  | developed or deployed with the specific purpose of inferring or | 
         
            |  | interpreting sensitive personal attributes of a person or group of | 
         
            |  | persons using biometric identifiers, except for the labeling or | 
         
            |  | filtering of lawfully acquired biometric identifier data. | 
         
            |  | Sec. 551.055.  UTILIZATION OF PERSONAL ATTRIBUTES FOR HARM. | 
         
            |  | An artificial intelligence system shall not utilize | 
         
            |  | characteristics of a person or a specific group of persons based on | 
         
            |  | their race, color, disability, religion, sex, national origin, age, | 
         
            |  | or a specific social or economic situation, with the objective, or | 
         
            |  | the effect, of materially distorting the behavior of that person or | 
         
            |  | a person belonging to that group in a manner that causes or is | 
         
            |  | reasonably likely to cause that person or another person harm. | 
         
            |  | Sec. 551.056.  CERTAIN SEXUALLY EXPLICIT VIDEOS, IMAGES, AND | 
         
            |  | CHILD PORNOGRAPHY.  An artificial intelligence system shall not be | 
         
            |  | developed or deployed that produces, assists, or aids in producing, | 
         
            |  | or is capable of producing unlawful visual material in violation of | 
         
            |  | Section 43.26, Penal Code or an unlawful deep fake video or image in | 
         
            |  | violation of Section 21.165, Penal Code. | 
         
            |  | SUBCHAPTER C.  ENFORCEMENT AND CONSUMER PROTECTIONS | 
         
            |  | Sec. 551.101.  CONSTRUCTION AND APPLICATION.  (a)  This | 
         
            |  | chapter shall be broadly construed and applied to promote its | 
         
            |  | underlying purposes, which are: | 
         
            |  | (1)  to facilitate and advance the responsible | 
         
            |  | development and use of artificial intelligence systems; | 
         
            |  | (2)  to protect individuals and groups of individuals | 
         
            |  | from known, and unknown but reasonably foreseeable, risks, | 
         
            |  | including unlawful algorithmic discrimination; | 
         
            |  | (3)  to provide transparency regarding those risks in | 
         
            |  | the development, deployment, or use of artificial intelligence | 
         
            |  | systems; and | 
         
            |  | (4)  to provide reasonable notice regarding the use or | 
         
            |  | considered use of artificial intelligence systems by state | 
         
            |  | agencies. | 
         
            |  | (b)  This chapter does not apply to the developer of an open | 
         
            |  | source artificial intelligence system, provided that: | 
         
            |  | (1)  the system is not deployed as a high-risk | 
         
            |  | artificial intelligence system and the developer has taken | 
         
            |  | reasonable steps to ensure that the system cannot be used as a | 
         
            |  | high-risk artificial intelligence system without substantial | 
         
            |  | modifications; and | 
         
            |  | (2)  the weights and technical architecture of the | 
         
            |  | system are made publicly available. | 
         
            |  | Sec. 551.102.  ENFORCEMENT AUTHORITY.  The attorney general | 
         
            |  | has authority to enforce this chapter. Excluding violations of | 
         
            |  | Subchapter B, researching, training, testing, or the conducting of | 
         
            |  | other pre-deployment activities by active participants of the | 
         
            |  | sandbox program, in compliance with Chapter 552, does not subject a | 
         
            |  | developer or deployer to penalties or actions. | 
         
            |  | Sec. 551.103.  INTERNET WEBSITE AND COMPLAINT MECHANISM. | 
         
            |  | The attorney general shall post on the attorney general's Internet | 
         
            |  | website: | 
         
            |  | (1)  information relating to the responsibilities of a | 

            |  | developer, distributor, and deployer under Subchapter A; and | 

            |  | (2)  an online mechanism through which a consumer may | 

            |  | submit a complaint under this chapter to the attorney general. | 
         
            |  | Sec. 551.104.  INVESTIGATIVE AUTHORITY.  (a)  If the | 
         
            |  | attorney general has reasonable cause to believe that a person has | 
         
            |  | engaged in or is engaging in a violation of this chapter, the | 
         
            |  | attorney general may issue a civil investigative demand.  The | 
         
            |  | attorney general shall issue such demands in accordance with and | 
         
            |  | under the procedures established under Section 15.10. | 
         
            |  | (b)  The attorney general may request, pursuant to a civil | 
         
            |  | investigative demand issued under Subsection (a), that a developer | 
         
            |  | or deployer of a high-risk artificial intelligence system disclose | 
         
            |  | their risk management policy and impact assessments required under | 
         
            |  | Subchapter A.  The attorney general may evaluate the risk | 
         
            |  | management policy and impact assessments for compliance with the | 
         
            |  | requirements set forth in Subchapter A. | 
         
            |  | (c)  The attorney general may not institute an action for a | 
         
            |  | civil penalty against a developer or deployer for artificial | 
         
            |  | intelligence systems that remain isolated from customer | 
         
            |  | interaction in a pre-deployment environment. | 
         
            |  | Sec. 551.105.  NOTICE OF VIOLATION OF CHAPTER; OPPORTUNITY | 
         
            |  | TO CURE.  Before bringing an action under Section 551.106, the | 
         
            |  | attorney general shall notify a developer, distributor, or deployer | 
         
            |  | in writing, not later than the 30th day before bringing the action, | 
         
            |  | identifying the specific provisions of this chapter the attorney | 
         
            |  | general alleges have been or are being violated.  The attorney | 
         
            |  | general may not bring an action against the developer or deployer | 
         
            |  | if: | 
         
            |  | (1)  within the 30-day period, the developer or | 
         
            |  | deployer cures the identified violation; and | 
         
            |  | (2)  the developer or deployer provides the attorney | 
         
            |  | general a written statement that the developer or deployer: | 
         
            |  | (A)  cured the alleged violation; | 
         
            |  | (B)  notified the consumer, if technically | 
         
            |  | feasible, and the council that the developer or deployer's | 
         
            |  | violation was addressed, if the consumer's contact information has | 
         
            |  | been made available to the developer or deployer and the attorney | 
         
            |  | general; | 
         
            |  | (C)  provided supportive documentation to show | 
         
            |  | how the violation was cured; and | 
         
            |  | (D)  made changes to internal policies, if | 
         
            |  | necessary, to reasonably ensure that no such further violations are | 
         
            |  | likely to occur. | 
         
            |  | Sec. 551.106.  CIVIL PENALTY; INJUNCTION.  (a)  The attorney | 
         
            |  | general may bring an action in the name of this state to restrain or | 
         
            |  | enjoin a person from violating this chapter and to seek injunctive | 
         
            |  | relief. | 
         
            |  | (b)  The attorney general may recover reasonable attorney's | 
         
            |  | fees and other reasonable expenses incurred in investigating and | 
         
            |  | bringing an action under this section. | 
         
            |  | (c)  The attorney general may assess and collect an | 
         
            |  | administrative fine against a developer or deployer who fails to | 
         
            |  | timely cure a violation or who breaches a written statement | 
         
            |  | provided to the attorney general, other than those for a prohibited | 
         
            |  | use, of not less than $50,000 and not more than $100,000 per uncured | 
         
            |  | violation. | 
         
            |  | (d)  The attorney general may assess and collect an | 
         
            |  | administrative fine against a developer or deployer who fails to | 
         
            |  | timely cure a violation of a prohibited use, or whose violation is | 
         
            |  | determined to be uncurable, of not less than $80,000 and not more | 
         
            |  | than $200,000 per violation. | 
         
            |  | (e)  A developer or deployer found in violation of this | 

            |  | chapter who continues to operate in violation of its provisions shall | 
         
            |  | be assessed an administrative fine of not less than $2,000 and not | 
         
            |  | more than $40,000 per day. | 
         
            |  | (f)  There is a rebuttable presumption that a developer, | 
         
            |  | distributor, or deployer used reasonable care as required under | 
         
            |  | this chapter if the developer, distributor, or deployer complied | 
         
            |  | with their duties under Subchapter A. | 
         
            |  | Sec. 551.107.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  A | 
         
            |  | state agency may sanction an individual licensed, registered, or | 
         
            |  | certified by that agency for violations of Subchapter B, including: | 
         
            |  | (1)  the suspension, probation, or revocation of a | 
         
            |  | license, registration, certificate, or other form of permission to | 
         
            |  | engage in an activity; and | 
         
            |  | (2)  monetary penalties up to $100,000. | 
         
            |  | Sec. 551.108.  CONSUMER RIGHTS AND REMEDIES.  A consumer may | 
         
            |  | appeal a consequential decision made by a high-risk artificial | 
         
            |  | intelligence system which has an adverse impact on their health, | 
         
            |  | safety, or fundamental rights, and shall have the right to obtain | 
         
            |  | from the deployer clear and meaningful explanations of the role of | 
         
            |  | the high-risk artificial intelligence system in the | 
         
            |  | decision-making procedure and the main elements of the decision | 
         
            |  | taken. | 
         
            |  | SUBCHAPTER D.  CONSTRUCTION OF CHAPTER; LOCAL PREEMPTION | 
         
            |  | Sec. 551.151.  CONSTRUCTION OF CHAPTER.  This chapter may | 
         
            |  | not be construed as imposing a requirement on a developer, a | 
         
            |  | deployer, or other person that adversely affects the rights or | 
         
            |  | freedoms of any person, including the right of free speech. | 
         
            |  | Sec. 551.152.  LOCAL PREEMPTION.  This chapter supersedes | 
         
            |  | and preempts any ordinance, resolution, rule, or other regulation | 
         
            |  | adopted by a political subdivision regarding the use of high-risk | 
         
            |  | artificial intelligence systems. | 
         
            |  | CHAPTER 552. ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM | 
         
            |  | SUBCHAPTER A.  GENERAL PROVISIONS | 
         
            |  | Sec. 552.001.  DEFINITIONS.  In this chapter: | 
         
            |  | (1)  "Applicable agency" means a state agency | 
         
            |  | responsible for regulating a specific sector impacted by an | 
         
            |  | artificial intelligence system. | 
         
            |  | (2)  "Consumer" means a person who engages in | 
         
            |  | transactions involving an artificial intelligence system or is | 
         
            |  | directly affected by the use of such a system. | 
         
            |  | (3)  "Council" means the Artificial Intelligence | 
         
            |  | Council established by Chapter 553. | 
         
            |  | (4)  "Department" means the Texas Department of | 
         
            |  | Information Resources. | 
         
            |  | (5)  "Program participant" means a person or business | 
         
            |  | entity approved to participate in the sandbox program. | 
         
            |  | (6)  "Sandbox program" means the regulatory framework | 
         
            |  | established under this chapter that allows temporary testing of | 
         
            |  | artificial intelligence systems in a controlled, limited manner | 
         
            |  | without full regulatory compliance. | 
         
            |  | SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK | 
         
            |  | Sec. 552.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The | 
         
            |  | department, in coordination with the council, shall administer the | 
         
            |  | Artificial Intelligence Regulatory Sandbox Program to facilitate | 
         
            |  | the development, testing, and deployment of innovative artificial | 
         
            |  | intelligence systems in Texas. | 
         
            |  | (b)  The sandbox program is designed to: | 
         
            |  | (1)  promote the safe and innovative use of artificial | 
         
            |  | intelligence across various sectors including healthcare, finance, | 
         
            |  | education, and public services; | 
         
            |  | (2)  encourage the responsible deployment of | 
         
            |  | artificial intelligence systems while balancing the need for | 
         
            |  | consumer protection, privacy, and public safety; and | 
         
            |  | (3)  provide clear guidelines for artificial | 
         
            |  | intelligence developers to test systems while temporarily exempt | 
         
            |  | from certain regulatory requirements. | 
         
            |  | Sec. 552.052.  APPLICATION PROCESS.  (a)  A person or | 
         
            |  | business entity seeking to participate in the sandbox program must | 
         
            |  | submit an application to the council. | 
         
            |  | (b)  The application must include: | 
         
            |  | (1)  a detailed description of the artificial | 
         
            |  | intelligence system and its intended use; | 
         
            |  | (2)  a risk assessment that addresses potential impacts | 
         
            |  | on consumers, privacy, and public safety; | 
         
            |  | (3)  a plan for mitigating any adverse consequences | 
         
            |  | during the testing phase; and | 
         
            |  | (4)  proof of compliance with federal artificial | 
         
            |  | intelligence laws and regulations, where applicable. | 
         
            |  | Sec. 552.053.  DURATION AND SCOPE OF PARTICIPATION.  A | 
         
            |  | participant may test an artificial intelligence system under the | 
         
            |  | sandbox program for a period of up to 36 months, unless extended by | 
         
            |  | the department for good cause. | 
         
            |  | SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE | 
         
            |  | Sec. 552.101.  AGENCY COORDINATION.  (a)  The department | 
         
            |  | shall coordinate with all relevant state regulatory agencies to | 
         
            |  | oversee the operations of the sandbox participants. | 
         
            |  | (b)  A relevant agency may recommend to the department that a | 
         
            |  | participant's sandbox privileges be revoked if the artificial | 
         
            |  | intelligence system: | 
         
            |  | (1)  poses undue risk to public safety or welfare; or | 
         
            |  | (2)  violates any federal or state laws that the | 
         
            |  | sandbox program cannot override. | 
         
            |  | Sec. 552.102.  REPORTING REQUIREMENTS.  (a)  Each sandbox | 
         
            |  | participant must submit quarterly reports to the department, which | 
         
            |  | shall include: | 
         
            |  | (1)  system performance metrics; | 
         
            |  | (2)  updates on how the system mitigates any risks | 
         
            |  | associated with its operation; and | 
         
            |  | (3)  feedback from consumers and affected stakeholders | 
         
            |  | using a product deployed under the sandbox program. | 
         
            |  | (b)  The department must submit an annual report to the | 
         
            |  | legislature detailing: | 
         
            |  | (1)  the number of participants in the sandbox program; | 
         
            |  | (2)  the overall performance and impact of artificial | 
         
            |  | intelligence systems tested within the program; and | 
         
            |  | (3)  recommendations for future legislative or | 
         
            |  | regulatory reforms. | 
         
            |  | CHAPTER 553.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL | 
         
            |  | SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL | 
         
            |  | Sec. 553.001.  CREATION OF COUNCIL.  (a)  The Artificial | 
         
            |  | Intelligence Council is administratively attached to the office of | 
         
            |  | the governor, and the office of the governor shall provide | 
         
            |  | administrative support to the council as provided by this section. | 
         
            |  | (b)  The office of the governor and the council shall enter | 
         
            |  | into a memorandum of understanding detailing: | 
         
            |  | (1)  the administrative support the council requires | 
         
            |  | from the office of the governor to fulfill the purposes of this | 
         
            |  | chapter; | 
         
            |  | (2)  the reimbursement of administrative expenses to | 
         
            |  | the office of the governor; and | 
         
            |  | (3)  any other provisions available by law to ensure | 
         
            |  | the efficient operation of the council as attached to the office of | 
         
            |  | the governor. | 
         
            |  | (c)  The purpose of the council is to: | 
         
            |  | (1)  ensure artificial intelligence systems are | 
         
            |  | ethical and in the public's best interest and do not harm public | 
         
            |  | safety or undermine individual freedoms by finding gaps in the | 
         
            |  | Penal Code and Chapter 82, Civil Practice and Remedies Code and | 
         
            |  | making recommendations to the legislature; | 
         
            |  | (2)  identify existing laws and regulations that impede | 
         
            |  | innovation in artificial intelligence development and recommend | 
         
            |  | appropriate reforms; | 
         
            |  | (3)  analyze opportunities to improve the efficiency | 
         
            |  | and effectiveness of state government operations through the use of | 
         
            |  | artificial intelligence systems; | 
         
            |  | (4)  investigate and evaluate potential instances of | 
         
            |  | regulatory capture, including undue influence by technology | 
         
            |  | companies or disproportionate burdens on smaller innovators; | 
         
            |  | (5)  investigate and evaluate the influence of | 
         
            |  | technology companies on other companies and determine the existence | 
         
            |  | or use of tools or processes designed to censor competitors or | 
         
            |  | users; and | 
         
            |  | (6)  offer guidance and recommendations to state | 
         
            |  | agencies including advisory opinions on the ethical and legal use | 
         
            |  | of artificial intelligence. | 
         
            |  | Sec. 553.002.  COUNCIL MEMBERSHIP.  (a)  The council is | 
         
            |  | composed of 10 members as follows: | 
         
            |  | (1)  four members of the public appointed by the | 
         
            |  | governor; | 
         
            |  | (2)  two members of the public appointed by the | 
         
            |  | lieutenant governor; | 
         
            |  | (3)  two members of the public appointed by the speaker | 
         
            |  | of the house of representatives; | 
         
            |  | (4)  one senator appointed by the lieutenant governor | 
         
            |  | as a nonvoting member; and | 
         
            |  | (5)  one member of the house of representatives | 
         
            |  | appointed by the speaker of the house of representatives as a | 
         
            |  | nonvoting member. | 
         
            |  | (b)  Voting members of the council serve staggered four-year | 
         
            |  | terms, with the terms of four members expiring every two years. | 
         
            |  | (c)  The governor shall appoint a chair from among the | 
         
            |  | members, and the council shall elect a vice chair from its | 
         
            |  | membership. | 
         
            |  | (d)  The council may establish an advisory board composed of | 
         
            |  | individuals from the public who possess expertise directly related | 
         
            |  | to the council's functions, including technical, ethical, | 
         
            |  | regulatory, and other relevant areas. | 
         
            |  | Sec. 553.003.  QUALIFICATIONS.  (a)  Members of the council | 
         
            |  | must be Texas residents and have knowledge or expertise in one or | 
         
            |  | more of the following areas: | 
         
            |  | (1)  artificial intelligence technologies; | 
         
            |  | (2)  data privacy and security; | 
         
            |  | (3)  ethics in technology or law; | 
         
            |  | (4)  public policy and regulation; or | 
         
            |  | (5)  risk management or safety related to artificial | 
         
            |  | intelligence systems. | 
         
            |  | (b)  Members must not hold an office of profit under the | 
         
            |  | state or federal government at the time of appointment. | 
         
            |  | Sec. 553.004.  STAFF AND ADMINISTRATION.  The council may | 
         
            |  | employ an executive director and other personnel as necessary to | 
         
            |  | perform its duties. | 
         
            |  | SUBCHAPTER B.  POWERS AND DUTIES OF THE COUNCIL | 
         
            |  | Sec. 553.101.  ISSUANCE OF ADVISORY OPINIONS.  (a)  A state | 
         
            |  | agency may request a written advisory opinion from the council | 
         
            |  | regarding the use of artificial intelligence systems in the state. | 
         
            |  | (b)  The council may issue advisory opinions on state use of | 
         
            |  | artificial intelligence systems regarding: | 
         
            |  | (1)  the compliance of artificial intelligence systems | 
         
            |  | with Texas law; | 
         
            |  | (2)  the ethical implications of artificial | 
         
            |  | intelligence deployments in the state; | 
         
            |  | (3)  data privacy and security concerns related to | 
         
            |  | artificial intelligence systems; or | 
         
            |  | (4)  potential liability or legal risks associated with | 
         
            |  | the use of artificial intelligence systems. | 
         
            |  | Sec. 553.102.  RULEMAKING AUTHORITY.  (a)  The council may | 
         
            |  | adopt rules necessary to administer its duties under this chapter, | 
         
            |  | including: | 
         
            |  | (1)  procedures for requesting advisory opinions; | 
         
            |  | (2)  standards for ethical artificial intelligence | 
         
            |  | development and deployment; and | 
         
            |  | (3)  guidelines for evaluating the safety, privacy, and | 
         
            |  | fairness of artificial intelligence systems. | 
         
            |  | (b)  The council's rules shall align with state laws on | 
         
            |  | artificial intelligence, technology, data security, and consumer | 
         
            |  | protection. | 
         
            |  | Sec. 553.103.  TRAINING AND EDUCATIONAL OUTREACH.  The | 
         
            |  | council shall conduct training programs for state agencies and | 
         
            |  | local governments on the ethical use of artificial intelligence | 
         
            |  | systems. | 
         
            |  | SECTION 3.  Section 503.001, Business & Commerce Code, is | 
         
            |  | amended by adding Subsection (c-3) to read as follows: | 
         
            |  | (c-3)  This section does not apply to the training, | 
         
            |  | processing, or storage of biometric identifiers involved in machine | 
         
            |  | learning or artificial intelligence systems, unless performed for | 
         
            |  | the purpose of uniquely identifying a specific individual.  If a | 
         
            |  | biometric identifier captured for the purpose of training an | 
         
            |  | artificial intelligence system is subsequently used for a | 
         
            |  | commercial purpose, the person possessing the biometric identifier | 
         
            |  | is subject to this section's provisions for the possession and | 
         
            |  | destruction of a biometric identifier and the associated penalties. | 
         
            |  | SECTION 4.  Sections 541.051(b), 541.101(a), 541.102(a), | 
         
            |  | and 541.104(a), Business & Commerce Code, are amended to read | 
         
            |  | as follows: | 
         
            |  | Sec. 541.051.  CONSUMER'S PERSONAL DATA RIGHTS; REQUEST TO | 
         
            |  | EXERCISE RIGHTS.  (a)  A consumer is entitled to exercise the | 
         
            |  | consumer rights authorized by this section at any time by | 
         
            |  | submitting a request to a controller specifying the consumer rights | 
         
            |  | the consumer wishes to exercise.  With respect to the processing of | 
         
            |  | personal data belonging to a known child, a parent or legal guardian | 
         
            |  | of the child may exercise the consumer rights on behalf of the | 
         
            |  | child. | 
         
            |  | (b)  A controller shall comply with an authenticated | 
         
            |  | consumer request to exercise the right to: | 
         
            |  | (1)  confirm whether a controller is processing the | 
         
            |  | consumer's personal data and to access the personal data; | 
         
            |  | (2)  correct inaccuracies in the consumer's personal | 
         
            |  | data, taking into account the nature of the personal data and the | 
         
            |  | purposes of the processing of the consumer's personal data; | 
         
            |  | (3)  delete personal data provided by or obtained about | 
         
            |  | the consumer; | 
         
            |  | (4)  if the data is available in a digital format, | 
         
            |  | obtain a copy of the consumer's personal data that the consumer | 
         
            |  | previously provided to the controller in a portable and, to the | 
         
            |  | extent technically feasible, readily usable format that allows the | 
         
            |  | consumer to transmit the data to another controller without | 
         
            |  | hindrance; [ or] | 
         
            |  | (5)  know if the consumer's personal data is or will be | 
         
            |  | used in any artificial intelligence system and for what purposes; | 
         
            |  | or | 
         
            |  | ([ 5]6)  opt out of the processing of the personal data | 
         
            |  | for purposes of: | 
         
            |  | (A)  targeted advertising; | 
         
            |  | (B)  the sale of personal data; [ or] | 
         
            |  | (C)  the sale of personal data for use in | 
         
            |  | artificial intelligence systems prior to being collected; or | 
         
            |  | ([ C]D)  profiling in furtherance of a decision | 
         
            |  | that produces a legal or similarly significant effect concerning | 
         
            |  | the consumer. | 
         
            |  | Sec. 541.101.  CONTROLLER DUTIES; TRANSPARENCY.  (a)  A | 
         
            |  | controller: | 
         
            |  | (1)  shall limit the collection of personal data to | 
         
            |  | what is adequate, relevant, and reasonably necessary in relation to | 
         
            |  | the purposes for which that personal data is processed, as | 
         
            |  | disclosed to the consumer; [ and] | 
         
            |  | (2)  for purposes of protecting the confidentiality, | 
         
            |  | integrity, and accessibility of personal data, shall establish, | 
         
            |  | implement, and maintain reasonable administrative, technical, and | 
         
            |  | physical data security practices that are appropriate to the volume | 
         
            |  | and nature of the personal data at issue[ .]; and | 
         
            |  | (3)  for purposes of protecting against unauthorized | 
         
            |  | access, disclosure, alteration, or destruction of data collected, | 
         
            |  | stored, and processed by artificial intelligence systems, shall | 
         
            |  | establish, implement, and maintain reasonable administrative, | 
         
            |  | technical, and physical data security practices that are | 
         
            |  | appropriate to the volume and nature of the data collected, stored, | 
         
            |  | and processed by artificial intelligence systems. | 
         
            |  | Sec. 541.102.  PRIVACY NOTICE.  (a)  A controller shall | 
         
            |  | provide consumers with a reasonably accessible and clear privacy | 
         
            |  | notice that includes: | 
         
            |  | (1)  the categories of personal data processed by the | 
         
            |  | controller, including, if applicable, any sensitive data processed | 
         
            |  | by the controller; | 
         
            |  | (2)  the purpose for processing personal data; | 
         
            |  | (3)  how consumers may exercise their consumer rights | 
         
            |  | under Subchapter B, including the process by which a consumer may | 
         
            |  | appeal a controller's decision with regard to the consumer's | 
         
            |  | request; | 
         
            |  | (4)  if applicable, the categories of personal data | 
         
            |  | that the controller shares with third parties; | 
         
            |  | (5)  if applicable, the categories of third parties | 
         
            |  | with whom the controller shares personal data; [ and] | 
         
            |  | (6)  if applicable, an acknowledgement of the | 
         
            |  | collection, use, and sharing of personal data for artificial | 
         
            |  | intelligence purposes; and | 
         
            |  | ([ 6]7)  a description of the methods required under | 
         
            |  | Section 541.055 through which consumers can submit requests to | 
         
            |  | exercise their consumer rights under this chapter. | 
         
            |  | Sec. 541.104.  DUTIES OF PROCESSOR.  (a)  A processor shall | 
         
            |  | adhere to the instructions of a controller and shall assist the | 
         
            |  | controller in meeting or complying with the controller's duties or | 
         
            |  | requirements under this chapter, including: | 
         
            |  | (1)  assisting the controller in responding to consumer | 
         
            |  | rights requests submitted under Section 541.051 by using | 
         
            |  | appropriate technical and organizational measures, as reasonably | 
         
            |  | practicable, taking into account the nature of processing and the | 
         
            |  | information available to the processor; | 
         
            |  | (2)  assisting the controller with regard to complying | 
         
            |  | with the [ requirement]requirements relating to the security of | 
         
            |  | processing personal data, and if applicable, the data collected, | 
         
            |  | stored, and processed by artificial intelligence systems and to the | 
         
            |  | notification of a breach of security of the processor's system | 
         
            |  | under Chapter 521, taking into account the nature of processing and | 
         
            |  | the information available to the processor; and | 
         
            |  | (3)  providing necessary information to enable the | 
         
            |  | controller to conduct and document data protection assessments | 
         
            |  | under Section 541.105. | 
         
            |  | SECTION 5.  Subtitle E, Title 4, Labor Code, is amended by | 
         
            |  | adding Chapter 319 to read as follows: | 
         
            |  | CHAPTER 319.  TEXAS ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT | 
         
            |  | GRANT PROGRAM | 
         
            |  | SUBCHAPTER A.  GENERAL PROVISIONS | 
         
            |  | Sec. 319.001.  DEFINITIONS.  In this chapter: | 
         
            |  | (1)  "Artificial intelligence industry" means | 
         
            |  | businesses, research organizations, governmental entities, and | 
         
            |  | educational institutions engaged in the development, deployment, | 
         
            |  | or use of artificial intelligence technologies in Texas. | 
         
            |  | (2)  "Commission" means the Texas Workforce | 
         
            |  | Commission. | 
         
            |  | (3)  "Eligible entity" means Texas-based businesses in | 
         
            |  | the artificial intelligence industry, public school districts, | 
         
            |  | community colleges, public technical institutes, and workforce | 
         
            |  | development organizations. | 
         
            |  | (4)  "Program" means the Texas Artificial Intelligence | 
         
            |  | Workforce Development Grant Program established under this | 
         
            |  | chapter. | 
         
            |  | SUBCHAPTER B.  ARTIFICIAL INTELLIGENCE WORKFORCE DEVELOPMENT GRANT | 
         
            |  | PROGRAM | 
         
            |  | Sec. 319.051.  ESTABLISHMENT OF GRANT PROGRAM.  (a)  The | 
         
            |  | commission shall establish the Texas Artificial Intelligence | 
         
            |  | Workforce Development Grant Program to: | 
         
            |  | (1)  support and assist Texas-based artificial | 
         
            |  | intelligence companies in developing a skilled workforce; | 
         
            |  | (2)  provide grants to local community colleges and | 
         
            |  | public high schools to implement or expand career and technical | 
         
            |  | education programs focused on artificial intelligence readiness | 
         
            |  | and skill development; and | 
         
            |  | (3)  offer opportunities to retrain and reskill workers | 
         
            |  | through partnerships with the artificial intelligence industry and | 
         
            |  | workforce development programs. | 
         
            |  | (b)  The program is intended to: | 
         
            |  | (1)  prepare Texas workers and students for employment | 
         
            |  | in the rapidly growing artificial intelligence industry; | 
         
            |  | (2)  support the creation of postsecondary programs and | 
         
            |  | certifications relevant to current artificial intelligence | 
         
            |  | opportunities; | 
         
            |  | (3)  ensure that Texas maintains a competitive edge in | 
         
            |  | artificial intelligence innovation and workforce development; and | 
         
            |  | (4)  address workforce gaps in artificial | 
         
            |  | intelligence-related fields, including data science, | 
         
            |  | cybersecurity, machine learning, robotics, and automation. | 
         
            |  | (c)  The commission shall adopt rules necessary to implement | 
         
            |  | this subchapter. | 
         
            |  | Sec. 319.052.  FEDERAL FUNDS AND GIFTS, GRANTS, AND | 
         
            |  | DONATIONS. | 
         
            |  | In addition to other money appropriated by the legislature, | 
         
            |  | for the purpose of providing artificial intelligence workforce | 
         
            |  | opportunities under the program established under this subchapter, | 
         
            |  | the commission may: | 
         
            |  | (1)  seek and apply for any available federal funds; | 
         
            |  | and | 
         
            |  | (2)  solicit and accept gifts, grants, and donations | 
         
            |  | from any other source, public or private, as necessary to ensure | 
         
            |  | effective implementation of the program. | 
         
            |  | Sec. 319.053.  ELIGIBILITY FOR GRANTS.  (a)  The following | 
         
            |  | entities are eligible to apply for grants under this program: | 
         
            |  | (1)  Texas-based businesses engaged in the development | 
         
            |  | or deployment of artificial intelligence technologies; | 
         
            |  | (2)  public school districts and charter schools | 
         
            |  | offering or seeking to offer career and technical education | 
         
            |  | programs in artificial intelligence-related fields or to update | 
         
            |  | existing curricula to address these fields; | 
         
            |  | (3)  public community colleges and technical | 
         
            |  | institutes that develop artificial intelligence-related curricula | 
         
            |  | or training programs or update existing curricula or training | 
         
            |  | programs to incorporate artificial intelligence training; and | 
         
            |  | (4)  workforce development organizations in | 
         
            |  | partnership with artificial intelligence companies to reskill and | 
         
            |  | retrain workers in artificial intelligence competencies. | 
         
            |  | (b)  To be eligible, the entity must: | 
         
            |  | (1)  submit an application to the commission in the | 
         
            |  | form and manner prescribed by the commission; and | 
         
            |  | (2)  demonstrate the capacity to develop and implement | 
         
            |  | training, educational, or workforce development programs that | 
         
            |  | align with the needs of the artificial intelligence industry in | 
         
            |  | Texas and lead to knowledge, skills, and work-based experiences | 
         
            |  | that are transferable to similar employment opportunities in the | 
         
            |  | artificial intelligence industry. | 
         
            |  | Sec. 319.054.  USE OF GRANTS.  (a)  Grants awarded under the | 
         
            |  | program may be used for: | 
         
            |  | (1)  developing or expanding workforce training | 
         
            |  | programs for artificial intelligence-related skills, including but | 
         
            |  | not limited to machine learning, data analysis, software | 
         
            |  | development, and robotics; | 
         
            |  | (2)  creating or enhancing career and technical | 
         
            |  | education programs in artificial intelligence for high school | 
         
            |  | students, with a focus on preparing them for careers in artificial | 
         
            |  | intelligence or related fields; | 
         
            |  | (3)  providing financial support for instructors, | 
         
            |  | equipment, and technology necessary for artificial | 
         
            |  | intelligence-related workforce training; | 
         
            |  | (4)  partnering with local businesses to develop | 
         
            |  | internship programs, on-the-job training opportunities, instructor | 
         
            |  | externships, and apprenticeships in the artificial intelligence | 
         
            |  | industry; | 
         
            |  | (5)  funding scholarships or stipends for students, | 
         
            |  | instructors, and workers participating in artificial intelligence | 
         
            |  | training programs, particularly for individuals from underserved | 
         
            |  | or underrepresented communities; or | 
         
            |  | (6)  reskilling and retraining workers displaced by | 
         
            |  | technological changes or job automation, with an emphasis on | 
         
            |  | artificial intelligence-related job roles. | 
         
            |  | (b)  The commission shall prioritize funding for: | 
         
            |  | (1)  initiatives that partner with rural and | 
         
            |  | underserved communities to promote artificial intelligence | 
         
            |  | education and career pathways; | 
         
            |  | (2)  programs that lead to credentials of value in | 
         
            |  | artificial intelligence or related fields; and | 
         
            |  | (3)  proposals that include partnerships between the | 
         
            |  | artificial intelligence industry, a public or private institution | 
         
            |  | of higher education in this state, and workforce development | 
         
            |  | organizations. | 
         
            |  | SECTION 6.  Section 325.011, Government Code, is amended to | 
         
            |  | read as follows: | 
         
            |  | Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its | 
         
            |  | staff shall consider the following criteria in determining whether | 
         
            |  | a public need exists for the continuation of a state agency or its | 
         
            |  | advisory committees or for the performance of the functions of the | 
         
            |  | agency or its advisory committees: | 
         
            |  | (1)  the efficiency and effectiveness with which the | 
         
            |  | agency or the advisory committee operates; | 
         
            |  | (2)(A)  an identification of the mission, goals, and | 
         
            |  | objectives intended for the agency or advisory committee and of the | 
         
            |  | problem or need that the agency or advisory committee was intended | 
         
            |  | to address; and | 
         
            |  | (B)  the extent to which the mission, goals, and | 
         
            |  | objectives have been achieved and the problem or need has been | 
         
            |  | addressed; | 
         
            |  | (3)(A)  an identification of any activities of the | 
         
            |  | agency in addition to those granted by statute and of the authority | 
         
            |  | for those activities; and | 
         
            |  | (B)  the extent to which those activities are | 
         
            |  | needed; | 
         
            |  | (4)  an assessment of authority of the agency relating | 
         
            |  | to fees, inspections, enforcement, and penalties; | 
         
            |  | (5)  whether less restrictive or alternative methods of | 
         
            |  | performing any function that the agency performs could adequately | 
         
            |  | protect or provide service to the public; | 
         
            |  | (6)  the extent to which the jurisdiction of the agency | 
         
            |  | and the programs administered by the agency overlap or duplicate | 
         
            |  | those of other agencies, the extent to which the agency coordinates | 
         
            |  | with those agencies, and the extent to which the programs | 
         
            |  | administered by the agency can be consolidated with the programs of | 
         
            |  | other state agencies; | 
         
            |  | (7)  the promptness and effectiveness with which the | 
         
            |  | agency addresses complaints concerning entities or other persons | 
         
            |  | affected by the agency, including an assessment of the agency's | 
         
            |  | administrative hearings process; | 
         
            |  | (8)  an assessment of the agency's rulemaking process | 
         
            |  | and the extent to which the agency has encouraged participation by | 
         
            |  | the public in making its rules and decisions and the extent to which | 
         
            |  | the public participation has resulted in rules that benefit the | 
         
            |  | public; | 
         
            |  | (9)  the extent to which the agency has complied with: | 
         
            |  | (A)  federal and state laws and applicable rules | 
         
            |  | regarding equality of employment opportunity and the rights and | 
         
            |  | privacy of individuals; and | 
         
            |  | (B)  state law and applicable rules of any state | 
         
            |  | agency regarding purchasing guidelines and programs for | 
         
            |  | historically underutilized businesses; | 
         
            |  | (10)  the extent to which the agency issues and | 
         
            |  | enforces rules relating to potential conflicts of interest of its | 
         
            |  | employees; | 
         
            |  | (11)  the extent to which the agency complies with | 
         
            |  | Chapters 551 and 552 and follows records management practices that | 
         
            |  | enable the agency to respond efficiently to requests for public | 
         
            |  | information; | 
         
            |  | (12)  the effect of federal intervention or loss of | 
         
            |  | federal funds if the agency is abolished; | 
         
            |  | (13)  the extent to which the purpose and effectiveness | 
         
            |  | of reporting requirements imposed on the agency justifies the | 
         
            |  | continuation of the requirement; [ and] | 
         
            |  | (14)  an assessment of the agency's cybersecurity | 
         
            |  | practices using confidential information available from the | 
         
            |  | Department of Information Resources or any other appropriate state | 
         
            |  | agency; and | 
         
            |  | (15)  an assessment, using information available from | 
         
            |  | the Department of Information Resources, the Attorney General, or | 
         
            |  | any other appropriate state agency, of the agency's use of | 
         
            |  | artificial intelligence systems and high-risk artificial intelligence | 
         
            |  | systems in its operations and its oversight of the use of | 
         
            |  | artificial intelligence systems by entities or persons under the | 
         
            |  | agency's jurisdiction, and any related impact on the agency's | 
         
            |  | ability to achieve its mission, goals, and objectives. | 
         
            |  | SECTION 7.  Section 2054.068(b), Government Code, is amended | 
         
            |  | to read as follows: | 
         
            |  | (b)  The department shall collect from each state agency | 
         
            |  | information on the status and condition of the agency's information | 
         
            |  | technology infrastructure, including information regarding: | 
         
            |  | (1)  the agency's information security program; | 
         
            |  | (2)  an inventory of the agency's servers, mainframes, | 
         
            |  | cloud services, and other information technology equipment; | 
         
            |  | (3)  identification of vendors that operate and manage | 
         
            |  | the agency's information technology infrastructure; [ and] | 
         
            |  | (4)  any additional related information requested by | 
         
            |  | the department; and | 
         
            |  | (5)  an evaluation of the use, or considered use, of | 
         
            |  | artificial intelligence systems and high-risk artificial | 
         
            |  | intelligence systems by each state agency. | 
         
            |  | SECTION 8.  Section 2054.0965(b), Government Code, is | 
         
            |  | amended to read as follows: | 
         
            |  | Sec. 2054.0965.  INFORMATION RESOURCES DEPLOYMENT REVIEW. | 
         
            |  | (b)  Except as otherwise modified by rules adopted by the | 
         
            |  | department, the review must include: | 
         
            |  | (1)  an inventory of the agency's major information | 
         
            |  | systems, as defined by Section 2054.008, and other operational or | 
         
            |  | logistical components related to deployment of information | 
         
            |  | resources as prescribed by the department; | 
         
            |  | (2)  an inventory of the agency's major databases, | 
         
            |  | artificial intelligence systems, and applications; | 
         
            |  | (3)  a description of the agency's existing and planned | 
         
            |  | telecommunications network configuration; | 
         
            |  | (4)  an analysis of how information systems, | 
         
            |  | components, databases, applications, and other information | 
         
            |  | resources have been deployed by the agency in support of: | 
         
            |  | (A)  applicable achievement goals established | 
         
            |  | under Section 2056.006 and the state strategic plan adopted under | 
         
            |  | Section 2056.009; | 
         
            |  | (B)  the state strategic plan for information | 
         
            |  | resources; and | 
         
            |  | (C)  the agency's business objectives, mission, | 
         
            |  | and goals; | 
         
            |  | (5)  agency information necessary to support the state | 
         
            |  | goals for interoperability and reuse; and | 
         
            |  | (6)  confirmation by the agency of compliance with | 
         
            |  | state statutes, rules, and standards relating to information | 
         
            |  | resources. | 
         
            |  | SECTION 9.  Not later than September 1, 2025, the attorney | 
         
            |  | general shall post on the attorney general's Internet website the | 
         
            |  | information and online mechanism required by Section 551.041, | 
         
            |  | Business & Commerce Code, as added by this Act. | 
         
            |  | SECTION 10.  This Act takes effect September 1, 2025. |