
American Data Privacy and Protection Act (ADPPA)

American Data Privacy and Protection Act (ADPPA). House of Representatives 8152, 117th Congress (2021-2022). Introduced June 21, 2022; reported in House December 30, 2022. Union Calendar No. 488.

Union Calendar No. 488

117th CONGRESS

2d Session

H. R. 8152

[Report No. 117–669]

To provide consumers with foundational data privacy rights, create strong oversight mechanisms, and establish meaningful enforcement.

IN THE HOUSE OF REPRESENTATIVES

June 21, 2022

Mr. Pallone (for himself, Mrs. Rodgers of Washington, Ms. Schakowsky, and Mr. Bilirakis) introduced the following bill; which was referred to the Committee on Energy and Commerce

December 30, 2022

Reported with an amendment, committed to the Committee of the Whole House on the State of the Union, and ordered to be printed

[Strike out all after the enacting clause and insert the part printed in italic]

[For text of introduced bill, see copy of bill as introduced on June 21, 2022]

A BILL

To provide consumers with foundational data privacy rights, create strong oversight mechanisms, and establish meaningful enforcement.

Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

SECTION 1. Short title; table of contents.

(a) Short title.—This Act may be cited as the “American Data Privacy and Protection Act”.

(b) Table of contents.—The table of contents of this Act is as follows:

Sec. 1. Short title; table of contents.

Sec. 2. Definitions.

TITLE I—DUTY OF LOYALTY

Sec. 101. Data minimization.

Sec. 102. Loyalty duties.

Sec. 103. Privacy by design.

Sec. 104. Loyalty to individuals with respect to pricing.

TITLE II—CONSUMER DATA RIGHTS

Sec. 201. Consumer awareness.

Sec. 202. Transparency.

Sec. 203. Individual data ownership and control.

Sec. 204. Right to consent and object.

Sec. 205. Data protections for children and minors.

Sec. 206. Third-party collecting entities.

Sec. 207. Civil rights and algorithms.

Sec. 208. Data security and protection of covered data.

Sec. 209. Small business protections.

Sec. 210. Unified opt-out mechanisms.

TITLE III—CORPORATE ACCOUNTABILITY

Sec. 301. Executive responsibility.

Sec. 302. Service providers and third parties.

Sec. 303. Technical compliance programs.

Sec. 304. Commission approved compliance guidelines.

Sec. 305. Digital content forgeries.

TITLE IV—ENFORCEMENT, APPLICABILITY, AND MISCELLANEOUS

Sec. 401. Enforcement by the Federal Trade Commission.

Sec. 402. Enforcement by States.

Sec. 403. Enforcement by persons.

Sec. 404. Relationship to Federal and State laws.

Sec. 405. Severability.

Sec. 406. COPPA.

Sec. 407. Authorization of appropriations.

Sec. 408. Effective date.

SEC. 2. Definitions.

In this Act:

(1) AFFIRMATIVE EXPRESS CONSENT.—

(A) IN GENERAL.—The term “affirmative express consent” means an affirmative act by an individual that clearly communicates the individual’s freely given, specific, and unambiguous authorization for an act or practice after having been informed, in response to a specific request from a covered entity that meets the requirements of subparagraph (B).

(B) REQUEST REQUIREMENTS.—The requirements of this subparagraph with respect to a request from a covered entity to an individual are the following:

(i) The request is provided to the individual in a clear and conspicuous standalone disclosure made through the primary medium used to offer the covered entity’s product or service, or only if the product or service is not offered in a medium that permits the making of the request under this paragraph, another medium regularly used in conjunction with the covered entity’s product or service.

(ii) The request includes a description of the processing purpose for which the individual’s consent is sought and—

(I) clearly states the specific categories of covered data that the covered entity shall collect, process, and transfer necessary to effectuate the processing purpose; and

(II) includes a prominent heading and is written in easy-to-understand language that would enable a reasonable individual to identify and understand the processing purpose for which consent is sought and the covered data to be collected, processed, or transferred by the covered entity for such processing purpose.

(iii) The request clearly explains the individual’s applicable rights related to consent.

(iv) The request is made in a manner reasonably accessible to and usable by individuals with disabilities.

(v) The request is made available to the individual in each covered language in which the covered entity provides a product or service for which authorization is sought.

(vi) The option to refuse consent shall be at least as prominent as the option to accept, and the option to refuse consent shall take the same number of steps or fewer as the option to accept.

(vii) Processing or transferring any covered data collected pursuant to affirmative express consent for a different processing purpose than that for which affirmative express consent was obtained shall require affirmative express consent for the subsequent processing purpose.

(C) EXPRESS CONSENT REQUIRED.—A covered entity may not infer that an individual has provided affirmative express consent to an act or practice from the inaction of the individual or the individual’s continued use of a service or product provided by the covered entity.

(D) PRETEXTUAL CONSENT PROHIBITED.—A covered entity may not obtain or attempt to obtain the affirmative express consent of an individual through—

(i) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or

(ii) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual’s autonomy, decision making, or choice to provide such consent or any covered data.
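
The clauses of subparagraph (B), together with the prohibitions in subparagraphs (C) and (D), read like a checklist. Purely as an editorial illustration (it is not part of the bill text), the sketch below shows how a compliance team might model such a checklist; every class, field, and function name is hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ConsentRequest:
    """Hypothetical model of a consent request under paragraph (1)(B)."""
    standalone_disclosure: bool         # clause (i): clear, conspicuous, standalone
    processing_purpose: str             # clause (ii): plain-language purpose description
    data_categories: list[str]          # clause (ii)(I): specific categories of covered data
    explains_consent_rights: bool       # clause (iii)
    accessible_to_disabled_users: bool  # clause (iv)
    languages: set[str] = field(default_factory=set)  # clause (v): each covered language
    steps_to_accept: int = 1            # clause (vi)
    steps_to_refuse: int = 1            # clause (vi)

def meets_request_requirements(req: ConsentRequest, covered_languages: set[str]) -> bool:
    """Rough checklist check; actual compliance is a legal judgment, not a boolean."""
    return (
        req.standalone_disclosure
        and bool(req.processing_purpose)
        and bool(req.data_categories)
        and req.explains_consent_rights
        and req.accessible_to_disabled_users
        and covered_languages <= req.languages
        and req.steps_to_refuse <= req.steps_to_accept  # refusing may not take more steps
    )
```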

(2) AUTHENTICATION.—The term “authentication” means the process of verifying an individual or entity for security purposes.

(3) BIOMETRIC INFORMATION.—

(A) IN GENERAL.—The term “biometric information” means any covered data generated from the technological processing of an individual’s unique biological, physical, or physiological characteristics that is linked or reasonably linkable to an individual, including—

(i) fingerprints;

(ii) voice prints;

(iii) iris or retina scans;

(iv) facial or hand mapping, geometry, or templates; or

(v) gait or personally identifying physical movements.

(B) EXCLUSION.—The term “biometric information” does not include—

(i) a digital or physical photograph;

(ii) an audio or video recording; or

(iii) data generated from a digital or physical photograph, or an audio or video recording, that cannot be used to identify an individual.

(4) COLLECT; COLLECTION.—The terms “collect” and “collection” mean buying, renting, gathering, obtaining, receiving, accessing, or otherwise acquiring covered data by any means.

(5) COMMISSION.—The term “Commission” means the Federal Trade Commission.

(6) CONTROL.—The term “control” means, with respect to an entity—

(A) ownership of, or the power to vote, more than 50 percent of the outstanding shares of any class of voting security of the entity;

(B) control over the election of a majority of the directors of the entity (or of individuals exercising similar functions); or

(C) the power to exercise a controlling influence over the management of the entity.

(7) COVERED ALGORITHM.—The term “covered algorithm” means a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data, including to determine the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.

(8) COVERED DATA.—

(A) IN GENERAL.—The term “covered data” means information that identifies or is linked or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to an individual, and may include derived data and unique persistent identifiers.

(B) EXCLUSIONS.—The term “covered data” does not include—

(i) de-identified data;

(ii) employee data;

(iii) publicly available information; or

(iv) inferences made exclusively from multiple independent sources of publicly available information that do not reveal sensitive covered data with respect to an individual.

(C) EMPLOYEE DATA DEFINED.—For purposes of subparagraph (B), the term “employee data” means—

(i) information relating to a job applicant collected by a covered entity acting as a prospective employer of such job applicant in the course of the application, or hiring process, if such information is collected, processed, or transferred by the prospective employer solely for purposes related to the employee’s status as a current or former job applicant of such employer;

(ii) information processed by an employer relating to an employee who is acting in a professional capacity for the employer, provided that such information is collected, processed, or transferred solely for purposes related to such employee’s professional activities on behalf of the employer;

(iii) the business contact information of an employee, including the employee’s name, position or title, business telephone number, business address, or business email address that is provided to an employer by an employee who is acting in a professional capacity, if such information is collected, processed, or transferred solely for purposes related to such employee’s professional activities on behalf of the employer;

(iv) emergency contact information collected by an employer that relates to an employee of that employer, if such information is collected, processed, or transferred solely for the purpose of having an emergency contact on file for the employee and for processing or transferring such information in case of an emergency; or

(v) information relating to an employee (or a spouse, dependent, other covered family member, or beneficiary of such employee) that is necessary for the employer to collect, process, or transfer solely for the purpose of administering benefits to which such employee (or spouse, dependent, other covered family member, or beneficiary of such employee) is entitled on the basis of the employee’s position with that employer.

(9) COVERED ENTITY.—

(A) IN GENERAL.—The term “covered entity”—

(i) means any entity or any person, other than an individual acting in a non-commercial context, that alone or jointly with others determines the purposes and means of collecting, processing, or transferring covered data and—

(I) is subject to the Federal Trade Commission Act (15 U.S.C. 41 et seq.);

(II) is a common carrier subject to the Communications Act of 1934 (47 U.S.C. 151 et seq.) and all Acts amendatory thereof and supplementary thereto; or

(III) is an organization not organized to carry on business for its own profit or that of its members; and

(ii) includes any entity or person that controls, is controlled by, or is under common control with the covered entity.

(B) EXCLUSIONS.—The term “covered entity” does not include—

(i) a Federal, State, Tribal, territorial, or local government entity such as a body, authority, board, bureau, commission, district, agency, or political subdivision of the Federal Government or a State, Tribal, territorial, or local government;

(ii) a person or an entity that is collecting, processing, or transferring covered data on behalf of a Federal, State, Tribal, territorial, or local government entity, in so far as such person or entity is acting as a service provider to the government entity; or

(iii) an entity that serves as a congressionally designated nonprofit, national resource center, and clearinghouse to provide assistance to victims, families, child-serving professionals, and the general public on missing and exploited children issues.

(C) NON-APPLICATION TO SERVICE PROVIDERS.—An entity shall not be considered to be a covered entity for purposes of this Act in so far as the entity is acting as a service provider (as defined in paragraph (29)).

(10) COVERED LANGUAGE.—The term “covered language” means the ten languages with the most users in the United States, according to the most recent United States Census.

(11) COVERED MINOR.—The term “covered minor” means an individual under the age of 17.

(12) DE-IDENTIFIED DATA.—The term “de-identified data” means information that does not identify and is not linked or reasonably linkable to a distinct individual or a device, regardless of whether the information is aggregated, and if the covered entity or service provider—

(A) takes reasonable technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual;

(B) publicly commits in a clear and conspicuous manner—

(i) to process and transfer the information solely in a de-identified form without any reasonable means for re-identification; and

(ii) to not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual; and

(C) contractually obligates any person or entity that receives the information from the covered entity or service provider—

(i) to comply with all of the provisions of this paragraph with respect to the information; and

(ii) to require that such contractual obligations be included contractually in all subsequent instances for which the data may be received.

(13) DERIVED DATA.—The term “derived data” means covered data that is created by the derivation of information, data, assumptions, correlations, inferences, predictions, or conclusions from facts, evidence, or another source of information or data about an individual or an individual’s device.

(14) DEVICE.—The term “device” means any electronic equipment capable of collecting, processing, or transferring covered data that is used by one or more individuals.

(15) EMPLOYEE.—The term “employee” means an individual who is an employee, director, officer, staff member, individual working as an independent contractor that is not a service provider, trainee, volunteer, or intern of an employer, regardless of whether such individual is paid, unpaid, or employed on a temporary basis.

(16) EXECUTIVE AGENCY.—The term “Executive agency” has the meaning given such term in section 105 of title 5, United States Code.

(17) FIRST PARTY ADVERTISING OR MARKETING.—The term “first party advertising or marketing” means advertising or marketing conducted by a first party either through direct communications with a user such as direct mail, email, or text message communications, or advertising or marketing conducted entirely within the first-party context, such as in a physical location operated by the first party, or on a web site or app operated by the first party.

(18) GENETIC INFORMATION.—The term “genetic information” means any covered data, regardless of its format, that concerns an individual’s genetic characteristics, including—

(A) raw sequence data that results from the sequencing of the complete, or a portion of the, extracted deoxyribonucleic acid (DNA) of an individual; or

(B) genotypic and phenotypic information that results from analyzing raw sequence data described in subparagraph (A).

(19) INDIVIDUAL.—The term “individual” means a natural person residing in the United States.

(20) KNOWLEDGE.—

(A) IN GENERAL.—The term “knowledge” means—

(i) with respect to a covered entity that is a covered high-impact social media company, the entity knew or should have known the individual was a covered minor;

(ii) with respect to a covered entity or service provider that is a large data holder, and otherwise is not a covered high-impact social media company, that the covered entity knew or acted in willful disregard of the fact that the individual was a covered minor; and

(iii) with respect to a covered entity or service provider that does not meet the requirements of clause (i) or (ii), actual knowledge.

(B) COVERED HIGH-IMPACT SOCIAL MEDIA COMPANY.—For purposes of this paragraph, the term “covered high-impact social media company” means a covered entity that provides any internet-accessible platform where—

(i) such covered entity generates $3,000,000,000 or more in annual revenue;

(ii) such platform has 300,000,000 or more monthly active users for not fewer than 3 of the preceding 12 months on the online product or service of such covered entity; and

(iii) such platform constitutes an online product or service that is primarily used by users to access or share user-generated content.
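
As an editorial aside (not part of the bill), the three prongs of subparagraph (B) are conjunctive, which the following hypothetical sketch makes explicit; the thresholds come directly from clauses (i) through (iii), and the function and parameter names are invented.

```python
def is_covered_high_impact_social_media_company(
    annual_revenue_usd: float,
    monthly_active_users_last_12_months: list[int],
    primarily_user_generated_content: bool,
) -> bool:
    """Illustrative reading of paragraph (20)(B): all three prongs must be satisfied."""
    revenue_prong = annual_revenue_usd >= 3_000_000_000            # clause (i)
    months_over = sum(1 for mau in monthly_active_users_last_12_months
                      if mau >= 300_000_000)
    users_prong = months_over >= 3                                  # clause (ii)
    return revenue_prong and users_prong and primarily_user_generated_content  # clause (iii)
```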

(21) LARGE DATA HOLDER.—

(A) IN GENERAL.—The term “large data holder” means a covered entity or service provider that, in the most recent calendar year—

(i) had annual gross revenues of $250,000,000 or more; and

(ii) collected, processed, or transferred—

(I) the covered data of more than 5,000,000 individuals or devices that identify or are linked or reasonably linkable to 1 or more individuals, excluding covered data collected and processed solely for the purpose of initiating, rendering, billing for, finalizing, completing, or otherwise collecting payment for a requested product or service; and

(II) the sensitive covered data of more than 200,000 individuals or devices that identify or are linked or reasonably linkable to 1 or more individuals.

(B) EXCLUSIONS.—The term “large data holder” does not include any instance in which the covered entity or service provider would qualify as a large data holder solely on the basis of collecting or processing—

(i) personal email addresses;

(ii) personal telephone numbers; or

(iii) log-in information of an individual or device to allow the individual or device to log in to an account administered by the covered entity or service provider.

(C) REVENUE.—For purposes of determining whether any covered entity or service provider is a large data holder, the term “revenue”, with respect to any covered entity or service provider that is not organized to carry on business for its own profit or that of its members—

(i) means the gross receipts the covered entity or service provider received, in whatever form, from all sources, without subtracting any costs or expenses; and

(ii) includes contributions, gifts, grants, dues or other assessments, income from investments, and proceeds from the sale of real or personal property.
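
For illustration only (not part of the bill), the revenue and volume thresholds of subparagraph (A), together with the exclusion in subparagraph (B), could be expressed as follows; the parameter names are hypothetical.

```python
def is_large_data_holder(
    annual_gross_revenue_usd: float,
    covered_data_individuals: int,           # excludes payment-only data per (A)(ii)(I)
    sensitive_covered_data_individuals: int,
    only_emails_phones_or_login_info: bool,  # exclusion in subparagraph (B)
) -> bool:
    """Illustrative reading of paragraph (21) for the most recent calendar year."""
    if only_emails_phones_or_login_info:
        return False
    return (
        annual_gross_revenue_usd >= 250_000_000
        and covered_data_individuals > 5_000_000
        and sensitive_covered_data_individuals > 200_000
    )
```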

(22) MARKET RESEARCH.—The term “market research” means the collection, processing, or transfer of covered data as reasonably necessary and proportionate to investigate the market for or marketing of products, services, or ideas, where the covered data is not—

(A) integrated into any product or service;

(B) otherwise used to contact any individual or individual’s device; or

(C) used to advertise or market to any individual or individual’s device.

(23) MATERIAL.—The term “material” means, with respect to an act, practice, or representation of a covered entity (including a representation made by the covered entity in a privacy policy or similar disclosure to individuals) involving the collection, processing, or transfer of covered data, that such act, practice, or representation is likely to affect a reasonable individual’s decision or conduct regarding a product or service.

(24) PRECISE GEOLOCATION INFORMATION.—

(A) IN GENERAL.—The term “precise geolocation information” means information that is derived from a device or technology that reveals the past or present physical location of an individual or device that identifies or is linked or reasonably linkable to 1 or more individuals, with sufficient precision to identify street level location information of an individual or device or the location of an individual or device within a range of 1,850 feet or less.

(B) EXCLUSION.—The term “precise geolocation information” does not include geolocation information identifiable or derived solely from the visual content of a legally obtained image, including the location of the device that captured such image.

(25) PROCESS.—The term “process” means to conduct or direct any operation or set of operations performed on covered data, including analyzing, organizing, structuring, retaining, storing, using, or otherwise handling covered data.

(26) PROCESSING PURPOSE.—The term “processing purpose” means a reason for which a covered entity or service provider collects, processes, or transfers covered data that is specific and granular enough for a reasonable individual to understand the material facts of how and why the covered entity or service provider collects, processes, or transfers the covered data.

(27) PUBLICLY AVAILABLE INFORMATION.—

(A) IN GENERAL.—The term “publicly available information” means any information that a covered entity or service provider has a reasonable basis to believe has been lawfully made available to the general public from—

(i) Federal, State, or local government records, if the covered entity collects, processes, and transfers such information in accordance with any restrictions or terms of use placed on the information by the relevant government entity;

(ii) widely distributed media;

(iii) a website or online service made available to all members of the public, for free or for a fee, including where all members of the public, for free or for a fee, can log in to the website or online service;

(iv) a disclosure that has been made to the general public as required by Federal, State, or local law; or

(v) the visual observation of the physical presence of an individual or a device in a public place, not including data collected by a device in the individual’s possession.

(B) CLARIFICATIONS; LIMITATIONS.—

(i) AVAILABLE TO ALL MEMBERS OF THE PUBLIC.—For purposes of this paragraph, information from a website or online service is not available to all members of the public if the individual who made the information available via the website or online service has restricted the information to a specific audience.

(ii) OTHER LIMITATIONS.—The term “publicly available information” does not include—

(I) any obscene visual depiction (as defined in section 1460 of title 18, United States Code);

(II) any inference made exclusively from multiple independent sources of publicly available information that reveals sensitive covered data with respect to an individual;

(III) biometric information;

(IV) publicly available information that has been combined with covered data;

(V) genetic information, unless otherwise made available by the individual to whom the information pertains as described in clause (ii) or (iii) of subparagraph (A); or

(VI) intimate images known to be nonconsensual.

(28) SENSITIVE COVERED DATA.—

(A) IN GENERAL.—The term “sensitive covered data” means the following types of covered data:

(i) A government-issued identifier, such as a Social Security number, passport number, or driver’s license number, that is not required by law to be displayed in public.

(ii) Any information that describes or reveals the past, present, or future physical health, mental health, disability, diagnosis, or healthcare condition or treatment of an individual.

(iii) A financial account number, debit card number, credit card number, or information that describes or reveals the income level or bank account balances of an individual, except that the last four digits of a debit or credit card number shall not be deemed sensitive covered data.

(iv) Biometric information.

(v) Genetic information.

(vi) Precise geolocation information.

(vii) An individual’s private communications such as voicemails, emails, texts, direct messages, or mail, or information identifying the parties to such communications, voice communications, video communications, and any information that pertains to the transmission of such communications, including telephone numbers called, telephone numbers from which calls were placed, the time calls were made, call duration, and location information of the parties to the call, unless the covered entity or a service provider acting on behalf of the covered entity is the sender or an intended recipient of the communication. Communications are not private for purposes of this clause if such communications are made from or to a device provided by an employer to an employee insofar as such employer provides conspicuous notice that such employer may access such communications.

(viii) Account or device log-in credentials, or security or access codes for an account or device.

(ix) Information identifying the sexual behavior of an individual in a manner inconsistent with the individual’s reasonable expectation regarding the collection, processing, or transfer of such information.

(x) Calendar information, address book information, phone or text logs, photos, audio recordings, or videos, maintained for private use by an individual, regardless of whether such information is stored on the individual’s device or is accessible from that device and is backed up in a separate location. Such information is not sensitive for purposes of this paragraph if such information is sent from or to a device provided by an employer to an employee insofar as such employer provides conspicuous notice that it may access such information.

(xi) A photograph, film, video recording, or other similar medium that shows the naked or undergarment-clad private area of an individual.

(xii) Information revealing the video content requested or selected by an individual collected by a covered entity that is not a provider of a service described in section 102(4). This clause does not include covered data used solely for transfers for independent video measurement.

(xiii) Information about an individual when the covered entity or service provider has knowledge that the individual is a covered minor.

(xiv) An individual’s race, color, ethnicity, religion, or union membership.

(xv) Information identifying an individual’s online activities over time and across third party websites or online services.

(xvi) Any other covered data collected, processed, or transferred for the purpose of identifying the types of covered data listed in clauses (i) through (xv).

(B) RULEMAKING.—The Commission may commence a rulemaking pursuant to section 553 of title 5, United States Code, to include in the definition of “sensitive covered data” any other type of covered data that may require a similar level of protection as the types of covered data listed in clauses (i) through (xvi) of subparagraph (A) as a result of any new method of collecting, processing, or transferring covered data.

(29) SERVICE PROVIDER.—

(A) IN GENERAL.—The term “service provider” means a person or entity that—

(i) collects, processes, or transfers covered data on behalf of, and at the direction of, a covered entity or a Federal, State, Tribal, territorial, or local government entity; and

(ii) receives covered data from or on behalf of a covered entity or a Federal, State, Tribal, territorial, or local government entity.

(B) TREATMENT WITH RESPECT TO SERVICE PROVIDER DATA.—A service provider that receives service provider data from another service provider as permitted under this Act shall be treated as a service provider under this Act with respect to such data.

(30) SERVICE PROVIDER DATA.—The term “service provider data” means covered data that is collected or processed by or has been transferred to a service provider by or on behalf of a covered entity, a Federal, State, Tribal, territorial, or local government entity, or another service provider for the purpose of allowing the service provider to whom such covered data is transferred to perform a service or function on behalf of, and at the direction of, such covered entity or Federal, State, Tribal, territorial, or local government entity.

(31) STATE.—The term “State” means any of the 50 States, the District of Columbia, the Commonwealth of Puerto Rico, the Virgin Islands of the United States, Guam, American Samoa, or the Commonwealth of the Northern Mariana Islands.

(32) STATE PRIVACY AUTHORITY.—The term “State privacy authority” means—

(A) the chief consumer protection officer of a State; or

(B) a State consumer protection agency with expertise in data protection, including the California Privacy Protection Agency.

(33) SUBSTANTIAL PRIVACY RISK.—The term “substantial privacy risk” means the collection, processing, or transfer of covered data in a manner that may result in any reasonably foreseeable substantial physical injury, economic injury, highly offensive intrusion into the privacy expectations of a reasonable individual under the circumstances, or discrimination on the basis of race, color, religion, national origin, sex, or disability.

(34) TARGETED ADVERTISING.—The term “targeted advertising”—

(A) means presenting to an individual or device identified by a unique identifier, or groups of individuals or devices identified by unique identifiers, an online advertisement that is selected based on known or predicted preferences, characteristics, or interests associated with the individual or a device identified by a unique identifier; and

(B) does not include—

(i) advertising or marketing to an individual or an individual’s device in response to the individual’s specific request for information or feedback;

(ii) contextual advertising, which is when an advertisement is displayed based on the content in which the advertisement appears and does not vary based on who is viewing the advertisement; or

(iii) processing covered data solely for measuring or reporting advertising or content, performance, reach, or frequency, including independent measurement.

(35) THIRD PARTY.—The term “third party”—

(A) means any person or entity, including a covered entity, that—

(i) collects, processes, or transfers covered data that the person or entity did not collect directly from the individual linked or linkable to such covered data; and

(ii) is not a service provider with respect to such data; and

(B) does not include a person or entity that collects covered data from another entity if the 2 entities are related by common ownership or corporate control, but only if a reasonable consumer’s reasonable expectation would be that such entities share information.

(36) THIRD-PARTY COLLECTING ENTITY.—

(A) IN GENERAL.—The term “third-party collecting entity”—

(i) means a covered entity whose principal source of revenue is derived from processing or transferring covered data that the covered entity did not collect directly from the individuals linked or linkable to the covered data; and

(ii) does not include a covered entity insofar as such entity processes employee data collected by and received from a third party concerning any individual who is an employee of the third party for the sole purpose of such third party providing benefits to the employee.

(B) PRINCIPAL SOURCE OF REVENUE DEFINED.—For purposes of this paragraph, the term “principal source of revenue” means, for the prior 12-month period, either—

(i) more than 50 percent of all revenue of the covered entity; or

(ii) obtaining revenue from processing or transferring the covered data of more than 5,000,000 individuals that the covered entity did not collect directly from the individuals linked or linkable to the covered data.

(C) NON-APPLICATION TO SERVICE PROVIDERS.—An entity may not be considered to be a third-party collecting entity for purposes of this Act if the entity is acting as a service provider.
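
As a purely editorial sketch (not bill text), the either/or structure of subparagraph (B) can be read as follows; the parameter names are hypothetical.

```python
def is_principal_source_of_revenue(
    revenue_from_indirectly_collected_data_usd: float,
    total_revenue_usd: float,
    individuals_with_indirectly_collected_data: int,
) -> bool:
    """Illustrative reading of paragraph (36)(B) over the prior 12-month period."""
    share_prong = (
        total_revenue_usd > 0
        and revenue_from_indirectly_collected_data_usd / total_revenue_usd > 0.5  # clause (i)
    )
    volume_prong = individuals_with_indirectly_collected_data > 5_000_000          # clause (ii)
    return share_prong or volume_prong
```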

(37) THIRD PARTY DATA.—The term “third party data” means covered data that has been transferred to a third party.

(38) TRANSFER.—The term “transfer” means to disclose, release, disseminate, make available, license, rent, or share covered data orally, in writing, electronically, or by any other means.

(39) UNIQUE PERSISTENT IDENTIFIER.—The term “unique identifier”—

(A) means an identifier to the extent that such identifier is reasonably linkable to an individual or device that identifies or is linked or reasonably linkable to 1 or more individuals, including a device identifier, Internet Protocol address, cookie, beacon, pixel tag, mobile ad identifier, or similar technology, customer number, unique pseudonym, user alias, telephone number, or other form of persistent or probabilistic identifier that is linked or reasonably linkable to an individual or device; and

(B) does not include an identifier assigned by a covered entity for the specific purpose of giving effect to an individual’s exercise of affirmative express consent or opt-outs of the collection, processing, and transfer of covered data pursuant to section 204 or otherwise limiting the collection, processing, or transfer of such information.

(40) WIDELY DISTRIBUTED MEDIA.—The term “widely distributed media” means information that is available to the general public, including information from a telephone book or online directory, a television, internet, or radio program, the news media, or an internet site that is available to the general public on an unrestricted basis, but does not include an obscene visual depiction (as defined in section 1460 of title 18, United States Code).

TITLE I—Duty of Loyalty

SEC. 101. Data minimization.

(a) In general.—A covered entity may not collect, process, or transfer covered data unless the collection, processing, or transfer is limited to what is reasonably necessary and proportionate to—

(1) provide or maintain a specific product or service requested by the individual to whom the data pertains; or

(2) effect a purpose permitted under subsection (b).

(b) Permissible purposes.—A covered entity may collect, process, or transfer covered data for any of the following purposes if the collection, processing, or transfer is limited to what is reasonably necessary and proportionate to such purpose:

(1) To initiate, manage, or complete a transaction or fulfill an order for specific products or services requested by an individual, including any associated routine administrative, operational, and account-servicing activity such as billing, shipping, delivery, storage, and accounting.

(2) With respect to covered data previously collected in accordance with this Act, notwithstanding this exception—

(A) to process such data as necessary to perform system maintenance or diagnostics;

(B) to develop, maintain, repair, or enhance a product or service for which such data was collected;

(C) to conduct internal research or analytics to improve a product or service for which such data was collected;

(D) to perform inventory management or reasonable network management;

(E) to protect against spam; or

(F) to debug or repair errors that impair the functionality of a service or product for which such data was collected.

(3) To authenticate users of a product or service.

(4) To fulfill a product or service warranty.

(5) To prevent, detect, protect against, or respond to a security incident. For purposes of this paragraph, security is defined as network security and physical security and life safety, including an intrusion or trespass, medical alerts, fire alarms, and access control security.

(6) To prevent, detect, protect against, or respond to fraud, harassment, or illegal activity. For purposes of this paragraph, the term “illegal activity” means a violation of a Federal, State, or local law punishable as a felony or misdemeanor that can directly harm.

(7) To comply with a legal obligation imposed by Federal, Tribal, local, or State law, or to investigate, establish, prepare for, exercise, or defend legal claims involving the covered entity or service provider.

(8) To prevent an individual, or group of individuals, from suffering harm where the covered entity or service provider believes in good faith that the individual, or group of individuals, is at risk of death, serious physical injury, or other serious health risk.

(9) To effectuate a product recall pursuant to Federal or State law.

(10) (A) To conduct a public or peer-reviewed scientific, historical, or statistical research project that—

(i) is in the public interest; and

(ii) adheres to all relevant laws and regulations governing such research, including regulations for the protection of human subjects, or is excluded from criteria of the institutional review board.

(B) Not later than 18 months after the date of enactment of this Act, the Commission should issue guidelines to help covered entities ensure the privacy of affected users and the security of covered data, particularly as data is being transferred to and stored by researchers. Such guidelines should consider risks as they pertain to projects using covered data with special considerations for projects that are exempt under part 46 of title 45, Code of Federal Regulations (or any successor regulation) or are excluded from the criteria for institutional review board review.

(11) To deliver a communication that is not an advertisement to an individual, if the communication is reasonably anticipated by the individual within the context of the individual’s interactions with the covered entity.

(12) To deliver a communication at the direction of an individual between such individual and one or more individuals or entities.

(13) To transfer assets to a third party in the context of a merger, acquisition, bankruptcy, or similar transaction when the third party assumes control, in whole or in part, of the covered entity’s assets, only if the covered entity, in a reasonable time prior to such transfer, provides each affected individual with—

(A) a notice describing such transfer, including the name of the entity or entities receiving the individual’s covered data and their privacy policies as described in section 202; and

(B) a reasonable opportunity to withdraw any previously given consents in accordance with the requirements of affirmative express consent under this Act related to the individual’s covered data and a reasonable opportunity to request the deletion of the individual’s covered data, as described in section 203.

(14) To ensure the data security and integrity of covered data, as described in section 208.

(15) With respect to covered data previously collected in accordance with this Act, a service provider acting at the direction of a government entity, or a service provided to a government entity by a covered entity, and only insofar as authorized by statute, to prevent, detect, protect against or respond to a public safety incident, including trespass, natural disaster, or national security incident. This paragraph does not permit, however, the transfer of covered data for payment or other valuable consideration to a government entity.

(16) With respect to covered data collected in accordance with this Act, notwithstanding this exception, to process such data as necessary to provide first party advertising or marketing of products or services provided by the covered entity for individuals who are not covered minors.

(17) With respect to covered data previously collected in accordance with this Act, notwithstanding this exception and provided such collection, processing, and transferring otherwise complies with the requirements of this Act, including section 204(c), to provide targeted advertising.

(c) Guidance.—The Commission shall issue guidance regarding what is reasonably necessary and proportionate to comply with this section. Such guidance shall take into consideration—

(1) the size of, and the nature, scope, and complexity of the activities engaged in by, the covered entity, including whether the covered entity is a large data holder, nonprofit organization, covered entity meeting the requirements of section 209, third party, or third-party collecting entity;

(2) the sensitivity of covered data collected, processed, or transferred by the covered entity;

(3) the volume of covered data collected, processed, or transferred by the covered entity; and

(4) the number of individuals and devices to which the covered data collected, processed, or transferred by the covered entity relates.

(d) Deceptive marketing of a product or service.—A covered entity or service provider may not engage in deceptive advertising or marketing with respect to a product or service offered to an individual.

(e) Journalism.—Nothing in this Act shall be construed to limit or diminish First Amendment freedoms guaranteed under the Constitution.

SEC. 102. Loyalty duties.

Notwithstanding section 101 and unless an exception applies, with respect to covered data, a covered entity or service provider may not—

(1) collect, process, or transfer a Social Security number, except when necessary to facilitate an extension of credit, authentication, fraud and identity fraud detection and prevention, the payment or collection of taxes, the enforcement of a contract between parties, or the prevention, investigation, or prosecution of fraud or illegal activity, or as otherwise required by Federal, State, or local law;

(2) collect or process sensitive covered data, except where such collection or processing is strictly necessary to provide or maintain a specific product or service requested by the individual to whom the covered data pertains, or is strictly necessary to effect a purpose enumerated in paragraphs (1) through (12) and (14) through (15) of section 101(b);

(3) transfer an individual’s sensitive covered data to a third party, unless—

(A) the transfer is made pursuant to the affirmative express consent of the individual;

(B) the transfer is necessary to comply with a legal obligation imposed by Federal, State, Tribal, or local law, or to establish, exercise, or defend legal claims;

(C) the transfer is necessary to prevent an individual from imminent injury where the covered entity believes in good faith that the individual is at risk of death, serious physical injury, or serious health risk;

(D) with respect to covered data collected in accordance with this Act, notwithstanding this exception, a service provider acting at the direction of a government entity, or a service provided to a government entity by a covered entity, and only insofar as authorized by statute, the transfer is necessary to prevent, detect, protect against or respond to a public safety incident including trespass, natural disaster, or national security incident. This paragraph does not permit, however, the transfer of covered data for payment or other valuable consideration to a government entity;

(E) in the case of the transfer of a password, the transfer is necessary to use a designated password manager or is to a covered entity for the exclusive purpose of identifying passwords that are being re-used across sites or accounts;

(F) in the case of the transfer of genetic information, the transfer is necessary to perform a medical diagnosis or medical treatment specifically requested by an individual, or to conduct medical research in accordance with conditions of section 101(b)(10); or

(G) to transfer assets in the manner described in paragraph (13) of section 101(b); or

(4) in the case of a provider of broadcast television service, cable service, satellite service, streaming media service, or other video programming service described in section 713(h)(2) of the Communications Act of 1934 (47 U.S.C. 613(h)(2)), transfer to an unaffiliated third party covered data that reveals the video content or services requested or selected by an individual from such service, except with the affirmative express consent of the individual or pursuant to one of the permissible purposes enumerated in paragraphs (1) through (15) of section 101(b).

SEC. 103. Privacy by design.

(a) Policies, practices, and procedures.—A covered entity and a service provider shall establish, implement, and maintain reasonable policies, practices, and procedures that reflect the role of the covered entity or service provider in the collection, processing, and transferring of covered data and that—

(1) consider applicable Federal laws, rules, or regulations related to covered data the covered entity or service provider collects, processes, or transfers;

(2) identify, assess, and mitigate privacy risks related to covered minors (including, if applicable, with respect to a covered entity that is not an entity meeting the requirements of section 209, in a manner that considers the developmental needs of different age ranges of covered minors) to result in reasonably necessary and proportionate residual risk to covered minors;

(3) mitigate privacy risks, including substantial privacy risks, related to the products and services of the covered entity or the service provider, including in the design, development, and implementation of such products and services, taking into account the role of the covered entity or service provider and the information available to it; and

(4) implement reasonable training and safeguards within the covered entity and service provider to promote compliance with all privacy laws applicable to covered data the covered entity collects, processes, or transfers or covered data the service provider collects, processes, or transfers on behalf of the covered entity and mitigate privacy risks, including substantial privacy risks, taking into account the role of the covered entity or service provider and the information available to it.

(b) Factors to consider.—The policies, practices, and procedures established by a covered entity and a service provider under subsection (a), shall correspond with, as applicable—

(1) the size of the covered entity or the service provider and the nature, scope, and complexity of the activities engaged in by the covered entity or service provider, including whether the covered entity or service provider is a large data holder, nonprofit organization, entity meeting the requirements of section 209, third party, or third-party collecting entity, taking into account the role of the covered entity or service provider and the information available to it;

(2) the sensitivity of the covered data collected, processed, or transferred by the covered entity or service provider;

(3) the volume of covered data collected, processed, or transferred by the covered entity or service provider;

(4) the number of individuals and devices to which the covered data collected, processed, or transferred by the covered entity or service provider relates; and

(5) the cost of implementing such policies, practices, and procedures in relation to the risks and nature of the covered data.

(c) Commission guidance.—Not later than 1 year after the date of enactment of this Act, the Commission shall issue guidance as to what constitutes reasonable policies, practices, and procedures as required by this section. The Commission shall consider unique circumstances applicable to nonprofit organizations, to entities meeting the requirements of section 209, and to service providers.

SEC. 104. Loyalty to individuals with respect to pricing.

(a) Retaliation through service or pricing prohibited.—A covered entity may not retaliate against an individual for exercising any of the rights guaranteed by the Act, or any regulations promulgated under this Act, including denying goods or services, charging different prices or rates for goods or services, or providing a different level of quality of goods or services.

(b) Rules of construction.—Nothing in subsection (a) may be construed to—

(1) prohibit the relation of the price of a service or the level of service provided to an individual to the provision, by the individual, of financial information that is necessarily collected and processed only for the purpose of initiating, rendering, billing for, or collecting payment for a service or product requested by the individual;

(2) prohibit a covered entity from offering a different price, rate, level, quality or selection of goods or services to an individual, including offering goods or services for no fee, if the offering is in connection with an individual’s voluntary participation in a bona fide loyalty program;

(3) require a covered entity to provide a bona fide loyalty program that would require the covered entity to collect, process, or transfer covered data that the covered entity otherwise would not collect, process, or transfer;

(4) prohibit a covered entity from offering a financial incentive or other consideration to an individual for participation in market research;

(5) prohibit a covered entity from offering different types of pricing or functionalities with respect to a product or service based on an individual’s exercise of a right under section 203(a)(3); or

(6) prohibit a covered entity from declining to provide a product or service insofar as the collection and processing of covered data is strictly necessary for such product or service.

(c) Bona fide loyalty program defined.—For purposes of this section, the term “bona fide loyalty program” includes rewards, premium features, discount or club card programs.

TITLE II—Consumer Data Rights

SEC. 201. Consumer awareness.

(a) In general.—Not later than 90 days after the date of enactment of this Act, the Commission shall publish, on the public website of the Commission, a webpage that describes each provision, right, obligation, and requirement of this Act, listed separately for individuals and for covered entities and service providers, and the remedies, exemptions, and protections associated with this Act, in plain and concise language and in an easy-to-understand manner.

(b) Updates.—The Commission shall update the information published under subsection (a) on a quarterly basis as necessitated by any change in law, regulation, guidance, or judicial decisions.

(c) Accessibility.—The Commission shall publish the information required to be published under subsection (a) in the ten languages with the most users in the United States, according to the most recent United States Census.

SEC. 202. Transparency.

(a) In general.—Each covered entity shall make publicly available, in a clear, conspicuous, not misleading, and easy-to-read and readily accessible manner, a privacy policy that provides a detailed and accurate representation of the data collection, processing, and transfer activities of the covered entity.

(b) Content of privacy policy.—A covered entity or service provider shall have a privacy policy that includes, at a minimum, the following:

(1) The identity and the contact information of—

(A) the covered entity or service provider to which the privacy policy applies (including the covered entity’s or service provider’s points of contact and generic electronic mail addresses, as applicable for privacy and data security inquiries); and

(B) any other entity within the same corporate structure as the covered entity or service provider to which covered data is transferred by the covered entity.

(2) The categories of covered data the covered entity or service provider collects or processes.

(3) The processing purposes for each category of covered data the covered entity or service provider collects or processes.

(4) Whether the covered entity or service provider transfers covered data and, if so, each category of service provider and third party to which the covered entity or service provider transfers covered data, the name of each third-party collecting entity to which the covered entity or service provider transfers covered data, and the purposes for which such data is transferred to such categories of service providers and third parties or third-party collecting entities, except for a transfer to a governmental entity pursuant to a court order or law that prohibits the covered entity or service provider from disclosing such transfer.

(5) The length of time the covered entity or service provider intends to retain each category of covered data, including sensitive covered data, or, if it is not possible to identify that timeframe, the criteria used to determine the length of time the covered entity or service provider intends to retain categories of covered data.

(6) A prominent description of how an individual can exercise the rights described in this Act.

(7) A general description of the covered entity’s or service provider’s data security practices.

(8) The effective date of the privacy policy.

(9) Whether or not any covered data collected by the covered entity or service provider is transferred to, processed in, stored in, or otherwise accessible to the People’s Republic of China, Russia, Iran, or North Korea.

(c) Languages.—The privacy policy required under subsection (a) shall be made available to the public in each covered language in which the covered entity or service provider—

(1) provides a product or service that is subject to the privacy policy; or

(2) carries out activities related to such product or service.

(d) Accessibility.—The covered entity or service provider shall also provide the disclosures under this section in a manner that is reasonably accessible to and usable by individuals with disabilities.

(e) Material changes.—

(1) AFFIRMATIVE EXPRESS CONSENT.—If a covered entity makes a material change to its privacy policy or practices, the covered entity shall notify each individual affected by such material change before implementing the material change with respect to any prospectively collected covered data and, except as provided in paragraphs (1) through (15) of section 101(b), provide a reasonable opportunity for each individual to withdraw consent to any further materially different collection, processing, or transfer of previously collected covered data under the changed policy.

(2) NOTIFICATION.—The covered entity shall take all reasonable electronic measures to provide direct notification regarding material changes to the privacy policy to each affected individual, in each covered language in which the privacy policy is made available, and taking into account available technology and the nature of the relationship.

(3) CLARIFICATION.—Nothing in this section may be construed to affect the requirements for covered entities under section 102 or 204.

(4) LOG OF MATERIAL CHANGES.—Each large data holder shall retain copies of previous versions of its privacy policy for at least 10 years beginning after the date of enactment of this Act and publish them on its website. Such large data holder shall make publicly available, in a clear, conspicuous, and readily accessible manner, a log describing the date and nature of each material change to its privacy policy over the past 10 years. The descriptions shall be sufficient for a reasonable individual to understand the material effect of each material change. The obligations in this paragraph shall not apply to any previous versions of a large data holder’s privacy policy, or any material changes to such policy, that precede the date of enactment of this Act.

(f) Short-form notice to consumers by large data holders.—

(1) IN GENERAL.—In addition to the privacy policy required under subsection (a), a large data holder that is a covered entity shall provide a short-form notice of its covered data practices in a manner that is—

(A) concise, clear, conspicuous, and not misleading;

(B) readily accessible to the individual, based on what is reasonably anticipated within the context of the relationship between the individual and the large data holder;

(C) inclusive of an overview of individual rights and disclosures to reasonably draw attention to data practices that may reasonably be unexpected to a reasonable person or that involve sensitive covered data; and

(D) no more than 500 words in length.

(2) RULEMAKING.—The Commission shall issue a rule pursuant to section 553 of title 5, United States Code, establishing the minimum data disclosures necessary for the short-form notice required under paragraph (1), which shall not exceed the content requirements in subsection (b) and shall include templates or models of short-form notices.
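
As an editorial illustration only, the 500-word cap in paragraph (1)(D) is the one requirement of this subsection that is mechanically checkable; a naive sketch (with a hypothetical function name) might look like this.

```python
def short_form_notice_within_length_limit(notice_text: str, max_words: int = 500) -> bool:
    """Naive word count against the cap in subsection (f)(1)(D)."""
    return len(notice_text.split()) <= max_words
```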

SEC. 203. Individual data ownership and control.

(a) Access to, and correction, deletion, and portability of, covered data.—In accordance with subsections (b) and (c), a covered entity shall provide an individual, after receiving a verified request from the individual, with the right to—

(1) access—

(A) in a human-readable format that a reasonable individual can understand and download from the internet, the covered data (except covered data in a back-up or archival system) of the individual making the request that is collected, processed, or transferred by the covered entity or any service provider of the covered entity within the 24 months preceding the request;

(B) the categories of any third party, if applicable, and an option for consumers to obtain the names of any such third party as well as the categories of any service providers to whom the covered entity has transferred for consideration the covered data of the individual, as well as the categories of sources from which the covered data was collected; and

(C) a description of the purpose for which the covered entity transferred the covered data of the individual to a third party or service provider;

(2) correct any verifiable substantial inaccuracy or substantially incomplete information with respect to the covered data of the individual that is processed by the covered entity and instruct the covered entity to make reasonable efforts to notify all third parties or service providers to which the covered entity transferred such covered data of the corrected information;

(3) delete covered data of the individual that is processed by the covered entity and instruct the covered entity to make reasonable efforts to notify all third parties or service providers to which the covered entity transferred such covered data of the individual’s deletion request; and

(4) to the extent technically feasible, export to the individual or directly to another entity the covered data of the individual that is processed by the covered entity, including inferences linked or reasonably linkable to the individual but not including other derived data, without licensing restrictions that limit such transfers in—

(A) a human-readable format that a reasonable individual can understand and download from the internet; and

(B) a portable, structured, interoperable, and machine-readable format.
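
A minimal sketch of the two export formats contemplated by subsection (a)(4), assuming a JSON document for the portable, machine-readable copy and a plain-text summary for the human-readable copy; the field names are hypothetical and the sketch is illustrative only, not part of the bill text.

```python
import json


def export_covered_data(covered_data: dict) -> tuple[str, str]:
    """Produce the two forms contemplated by subsection (a)(4):
    a human-readable summary and a portable, structured, machine-readable copy."""
    # Portable, structured, interoperable, machine-readable format (JSON here).
    machine_readable = json.dumps(covered_data, indent=2, sort_keys=True)
    # Human-readable format a reasonable individual can understand and download.
    human_readable = "\n".join(f"{field}: {value}" for field, value in covered_data.items())
    return human_readable, machine_readable


# Hypothetical record; a real export would include inferences linked or
# reasonably linkable to the individual but exclude other derived data.
summary, payload = export_covered_data({"email": "jane@example.com", "inferred_interest": "gardening"})
print(summary)
print(payload)
```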

(b) Individual autonomy.—A covered entity may not condition, effectively condition, attempt to condition, or attempt to effectively condition the exercise of a right described in subsection (a) through—

(1) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or

(2) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual’s autonomy, decision making, or choice to exercise such right.

(c) Timing.—

(1) IN GENERAL.—Subject to subsections (d) and (e), each request under subsection (a) shall be completed by any—

(A) large data holder within 45 days of such request from an individual, unless it is demonstrably impracticable or impracticably costly to verify such individual;

(B) covered entity that is not a large data holder or a covered entity meeting the requirements of section 209 within 60 days of such request from an individual, unless it is demonstrably impracticable or impracticably costly to verify such individual; or

(C) covered entity meeting the requirements of section 209 within 90 days of such request from an individual, unless it is demonstrably impracticable or impracticably costly to verify such individual.

(2) EXTENSION.—A response period set forth in this subsection may be extended once by 45 additional days when reasonably necessary, considering the complexity and number of the individual’s requests, so long as the covered entity informs the individual of any such extension within the initial 45-day response period, together with the reason for the extension.
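
The response windows in this subsection reduce to a simple date calculation. The following is a minimal sketch, assuming calendar days and a single optional 45-day extension; the entity-type labels are hypothetical and the sketch is illustrative only, not part of the bill text.

```python
from datetime import date, timedelta

# Base response windows from paragraph (1), in calendar days (an assumption;
# the bill says "days" without specifying calendar or business days).
BASE_DAYS = {
    "large_data_holder": 45,        # paragraph (1)(A)
    "standard_covered_entity": 60,  # paragraph (1)(B)
    "section_209_entity": 90,       # paragraph (1)(C)
}

EXTENSION_DAYS = 45  # paragraph (2): one extension, when reasonably necessary


def response_deadline(request_date: date, entity_type: str, extended: bool = False) -> date:
    """Return the latest completion date for a verified request under subsection (a)."""
    days = BASE_DAYS[entity_type]
    if extended:
        days += EXTENSION_DAYS  # only one extension is permitted
    return request_date + timedelta(days=days)


# Example: a large data holder that invokes the single 45-day extension.
print(response_deadline(date(2025, 1, 2), "large_data_holder", extended=True))
```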

(d) Frequency and cost of access.—A covered entity—

(1) shall provide an individual with the opportunity to exercise each of the rights described in subsection (a); and

(2) with respect to—

(A) the first 2 times that an individual exercises any right described in subsection (a) in any 12-month period, shall allow the individual to exercise such right free of charge; and

(B) any time beyond the initial 2 times described in subparagraph (A), may allow the individual to exercise such right for a reasonable fee for each request.

(e) Verification and exceptions.—

(1) REQUIRED EXCEPTIONS.—A covered entity may not permit an individual to exercise a right described in subsection (a), in whole or in part, if the covered entity—

(A) cannot reasonably verify that the individual making the request to exercise the right is the individual whose covered data is the subject of the request or an individual authorized to make such a request on the individual’s behalf;

(B) reasonably believes that the request is made to interfere with a contract between the covered entity and another individual;

(C) determines that the exercise of the right would require access to or correction of another individual’s sensitive covered data;

(D) reasonably believes that the exercise of the right would require the covered entity to engage in an unfair or deceptive practice under section 5 of the Federal Trade Commission Act (15 U.S.C. 45); or

(E) reasonably believes that the request is made to further fraud, support criminal activity, or the exercise of the right presents a data security threat.

(2) ADDITIONAL INFORMATION.—If a covered entity cannot reasonably verify that a request to exercise a right described in subsection (a) is made by the individual whose covered data is the subject of the request (or an individual authorized to make such a request on the individual’s behalf), the covered entity—

(A) may request that the individual making the request to exercise the right provide any additional information necessary for the sole purpose of verifying the identity of the individual; and

(B) may not process or transfer such additional information for any other purpose.

(3) PERMISSIVE EXCEPTIONS.—

(A) IN GENERAL.—A covered entity may decline, with adequate explanation to the individual, to comply with a request to exercise a right described in subsection (a), in whole or in part, that would—

(i) require the covered entity to retain any covered data collected for a single, one-time transaction, if such covered data is not processed or transferred by the covered entity for any purpose other than completing such transaction;

(ii) be demonstrably impracticable or prohibitively costly to comply with, and the covered entity shall provide a description to the requestor detailing the inability to comply with the request;

(iii) require the covered entity to attempt to re-identify de-identified data;

(iv) require the covered entity to maintain covered data in an identifiable form or collect, retain, or access any data in order to be capable of associating a verified individual request with covered data of such individual;

(v) result in the release of trade secrets or other privileged or confidential business information;

(vi) require the covered entity to correct any covered data that cannot be reasonably verified as being inaccurate or incomplete;

(vii) interfere with law enforcement, judicial proceedings, investigations, or reasonable efforts to guard against, detect, prevent, or investigate fraudulent, malicious, or unlawful activity, or enforce valid contracts;

(viii) violate Federal or State law or the rights and freedoms of another individual, including under the Constitution of the United States;

(ix) prevent a covered entity from being able to maintain a confidential record of deletion requests, maintained solely for the purpose of preventing covered data of an individual from being recollected after the individual submitted a deletion request and requested that the covered entity no longer collect, process, or transfer such data;

(x) fall within an exception enumerated in the regulations promulgated by the Commission pursuant to subparagraph (D); or

(xi) with respect to requests for deletion—

(I) unreasonably interfere with the provision of products or services by the covered entity to another person it currently serves;

(II) delete covered data that relates to a public figure and for which the requesting individual has no reasonable expectation of privacy;

(III) delete covered data reasonably necessary to perform a contract between the covered entity and the individual;

(IV) delete covered data that the covered entity needs to retain in order to comply with professional ethical obligations;

(V) delete covered data that the covered entity reasonably believes may be evidence of unlawful activity or an abuse of the covered entity’s products or services; or

(VI) for private elementary and secondary schools as defined by State law and private institutions of higher education as defined by title I of the Higher Education Act of 1965, delete covered data that would unreasonably interfere with the provision of education services by or the ordinary operation of the school or institution.

(B) PARTIAL COMPLIANCE.—In a circumstance that would allow a denial pursuant to subparagraph (A), a covered entity shall partially comply with the remainder of the request if it is possible and not unduly burdensome to do so.

(C) NUMBER OF REQUESTS.—For purposes of subparagraph (A)(ii), the receipt of a large number of verified requests, on its own, may not be considered to render compliance with a request demonstrably impracticable.

(D) FURTHER EXCEPTIONS.—The Commission may, by regulation as described in subsection (g), establish additional permissive exceptions necessary to protect the rights of individuals, alleviate undue burdens on covered entities, prevent unjust or unreasonable outcomes from the exercise of access, correction, deletion, or portability rights, or as otherwise necessary to fulfill the purposes of this section. In establishing such exceptions, the Commission should consider any relevant changes in technology, means for protecting privacy and other rights, and beneficial uses of covered data by covered entities.

(f) Large data holder metrics reporting.—A large data holder that is a covered entity shall, for each calendar year in which it was a large data holder, do the following:

(1) Compile the following metrics for the prior calendar year:

(A) The number of verified access requests under subsection (a)(1).

(B) The number of verified deletion requests under subsection (a)(3).

(C) The number of requests to opt-out of covered data transfers under section 204(b).

(D) The number of requests to opt-out of targeted advertising under section 204(c).

(E) The number of requests in each of subparagraphs (A) through (D) that such large data holder (i) complied with in whole or in part and (ii) denied.

(F) The median or mean number of days within which such large data holder substantively responded to the requests in each of subparagraphs (A) through (D).

(2) Disclose by July 1 of each applicable calendar year the information compiled in paragraph (1) within such large data holder’s privacy policy required under section 202 or on the publicly accessible website of such large data holder that is accessible from a hyperlink included in the privacy policy.
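
A minimal sketch of the paragraph (1) compilation, assuming a simple in-memory request log with hypothetical field names; it counts requests by type, tallies grants and denials, and reports the median number of days to a substantive response. It is illustrative only, not part of the bill text.

```python
from statistics import median

# Hypothetical request log for the prior calendar year: each entry records the
# request type, whether it was granted in whole or in part, and the days taken
# to respond substantively.
requests = [
    {"type": "access", "granted": True, "days_to_response": 12},
    {"type": "deletion", "granted": False, "days_to_response": 30},
    {"type": "ad_opt_out", "granted": True, "days_to_response": 3},
]


def compile_metrics(log):
    """Compile paragraph (1)-style metrics: counts, grants/denials, median response days."""
    metrics = {}
    for req_type in ("access", "deletion", "transfer_opt_out", "ad_opt_out"):
        subset = [r for r in log if r["type"] == req_type]
        metrics[req_type] = {
            "received": len(subset),
            "granted_in_whole_or_part": sum(r["granted"] for r in subset),
            "denied": sum(not r["granted"] for r in subset),
            "median_days_to_response": median(r["days_to_response"] for r in subset) if subset else None,
        }
    return metrics


print(compile_metrics(requests))
```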

(g) Regulations.—Not later than 2 years after the date of enactment of this Act, the Commission shall promulgate regulations, pursuant to section 553 of title 5, United States Code, as necessary to establish processes by which covered entities are to comply with the provisions of this section. Such regulations shall take into consideration—

(1) the size of, and the nature, scope, and complexity of the activities engaged in by the covered entity, including whether the covered entity is a large data holder, nonprofit organization, covered entity meeting the requirements of section 209, third party, or third-party collecting entity;

(2) the sensitivity of covered data collected, processed, or transferred by the covered entity;

(3) the volume of covered data collected, processed, or transferred by the covered entity;

(4) the number of individuals and devices to which the covered data collected, processed, or transferred by the covered entity relates; and

(5) after consulting the National Institute of Standards and Technology, standards for ensuring the deletion of covered data under this Act where appropriate.

(h) Accessibility.—A covered entity shall facilitate the ability of individuals to make requests under subsection (a) in any covered language in which the covered entity provides a product or service. The mechanisms by which a covered entity enables individuals to make requests under subsection (a) shall be readily accessible to and usable by individuals with disabilities.

SEC. 204. Right to consent and object.

(a) Withdrawal of consent.—A covered entity shall provide an individual with a clear and conspicuous, easy-to-execute means to withdraw any affirmative express consent previously provided by the individual that is as easy to execute by a reasonable individual as the means to provide consent, with respect to the processing or transfer of the covered data of the individual.

(b) Right to opt out of covered data transfers.—

(1) IN GENERAL.—A covered entity—

(A) may not transfer or direct the transfer of the covered data of an individual to a third party if the individual objects to the transfer; and

(B) shall allow an individual to object to such a transfer through an opt-out mechanism, as described in section 210.

(2) EXCEPTION.—Except as provided in section 206(b)(3)(C), a covered entity need not allow an individual to opt out of the collection, processing, or transfer of covered data made pursuant to the exceptions in paragraphs (1) through (15) of section 101(b).

(c) Right to opt out of targeted advertising.—

(1) A covered entity or service provider that directly delivers a targeted advertisement shall—

(A) prior to engaging in targeted advertising to an individual or device and at all times thereafter, provide such individual with a clear and conspicuous means to opt out of targeted advertising;

(B) abide by any opt-out designation by an individual with respect to targeted advertising and notify the covered entity that directed the service provider to deliver the targeted advertisement of the opt-out decision; and

(C) allow an individual to make an opt-out designation with respect to targeted advertising through an opt-out mechanism, as described in section 210.

(2) A covered entity or service provider that receives an opt-out notification pursuant to paragraph (1)(B) or this paragraph shall abide by such opt-out designations by an individual and notify any other person that directed the covered entity or service provider to serve, deliver, or otherwise handle the advertisement of the opt-out decision.

(d) Individual autonomy.—A covered entity may not condition, effectively condition, attempt to condition, or attempt to effectively condition the exercise of any individual right under this section through—

(1) the use of any false, fictitious, fraudulent, or materially misleading statement or representation; or

(2) the design, modification, or manipulation of any user interface with the purpose or substantial effect of obscuring, subverting, or impairing a reasonable individual’s autonomy, decision making, or choice to exercise any such right.

SEC. 205. Data protections for children and minors.

(a) Prohibition on targeted advertising to children and minors.—A covered entity may not engage in targeted advertising to any individual if the covered entity has knowledge that the individual is a covered minor.

(b) Data transfer requirements related to covered minors.—

(1) IN GENERAL.—A covered entity may not transfer or direct the transfer of the covered data of a covered minor to a third party if the covered entity—

(A) has knowledge that the individual is a covered minor; and

(B) has not obtained affirmative express consent from the covered minor or the covered minor’s parent or guardian.

(2) EXCEPTION.—A covered entity or service provider may collect, process, or transfer covered data of an individual the covered entity or service provider knows is under the age of 18 solely in order to submit information relating to child victimization to law enforcement or to the nonprofit, national resource center and clearinghouse congressionally designated to provide assistance to victims, families, child-serving professionals, and the general public on missing and exploited children issues.

(c) Youth privacy and marketing division.—

(1) ESTABLISHMENT.—There is established within the Commission, in the privacy bureau established in this Act, a division to be known as the “Youth Privacy and Marketing Division” (in this section referred to as the “Division”).

(2) DIRECTOR.—The Division shall be headed by a Director, who shall be appointed by the Chair of the Commission.

(3) DUTIES.—The Division shall be responsible for assisting the Commission in addressing, as it relates to this Act—

(A) the privacy of children and minors; and

(B) marketing directed at children and minors.

(4) STAFF.—The Director of the Division shall hire adequate staff to carry out the duties described in paragraph (3), including by hiring individuals who are experts in data protection, digital advertising, data analytics, and youth development.

(5) REPORTS.—Not later than 2 years after the date of enactment of this Act, and annually thereafter, the Commission shall submit to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Energy and Commerce of the House of Representatives a report that includes—

(A) a description of the work of the Division regarding emerging concerns relating to youth privacy and marketing practices; and

(B) an assessment of how effectively the Division has, during the period for which the report is submitted, assisted the Commission to address youth privacy and marketing practices.

(6) PUBLICATION.—Not later than 10 days after the date on which a report is submitted under paragraph (5), the Commission shall publish the report on its website.

(d) Report by the inspector general.—

(1) IN GENERAL.—Not later than 2 years after the date of enactment of this Act, and biennially thereafter, the Inspector General of the Commission shall submit to the Commission and to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Energy and Commerce of the House of Representatives a report regarding the safe harbor provisions in section 1304 of the Children’s Online Privacy Protection Act of 1998 (15 U.S.C. 6503), which shall include—

(A) an analysis of whether the safe harbor provisions are—

(i) operating fairly and effectively; and

(ii) effectively protecting the interests of children and minors; and

(B) any proposal or recommendation for policy changes that would improve the effectiveness of the safe harbor provisions.

(2) PUBLICATION.—Not later than 10 days after the date on which a report is submitted under paragraph (1), the Commission shall publish the report on the website of the Commission.

SEC. 206. Third-party collecting entities.

(a) Notice.—Each third-party collecting entity shall place a clear, conspicuous, not misleading, and readily accessible notice on the website or mobile application of the third-party collecting entity (if the third-party collecting entity maintains such a website or mobile application) that—

(1) notifies individuals that the entity is a third-party collecting entity using specific language that the Commission shall develop through rulemaking under section 553 of title 5, United States Code;

(2) includes a link to the website established under subsection (b)(3); and

(3) is reasonably accessible to and usable by individuals with disabilities.

(b) Third-party collecting entity registration.—

(1) IN GENERAL.—Not later than January 31 of each calendar year that follows a calendar year during which a covered entity acted as a third-party collecting entity and processed covered data pertaining to more than 5,000 individuals or devices that identify or are linked or reasonably linkable to an individual, such covered entity shall register with the Commission in accordance with this subsection.

(2) REGISTRATION REQUIREMENTS.—In registering with the Commission as required under paragraph (1), a third-party collecting entity shall do the following:

(A) Pay to the Commission a registration fee of $100.

(B) Provide the Commission with the following information:

(i) The legal name and primary physical, email, and internet addresses of the third-party collecting entity.

(ii) A description of the categories of covered data the third-party collecting entity processes and transfers.

(iii) The contact information of the third-party collecting entity, including a contact person, a telephone number, an e-mail address, a website, and a physical mailing address.

(iv) A link to a website through which an individual may easily exercise the rights provided under this subsection.

(3) THIRD-PARTY COLLECTING ENTITY REGISTRY.—The Commission shall establish and maintain on a website a searchable, publicly available, central registry of third-party collecting entities that are registered with the Commission under this subsection that includes the following:

(A) A listing of all registered third-party collecting entities and a search feature that allows members of the public to identify individual third-party collecting entities.

(B) For each registered third-party collecting entity, the information provided under paragraph (2)(B).

(C) (i) A “Do Not Collect” registry link and mechanism by which an individual may easily submit a request to all registered third-party collecting entities that are not consumer reporting agencies (as defined in section 603(f) of the Fair Credit Reporting Act (15 U.S.C. 1681a(f))), and to the extent such third-party collecting entities are not acting as consumer reporting agencies (as so defined), to—

(I) delete all covered data related to such individual that the third-party collecting entity did not collect from such individual directly or when acting as a service provider; and

(II) ensure that the third-party collecting entity no longer collects covered data related to such individual without the affirmative express consent of such individual, except insofar as the third-party collecting entity is acting as a service provider.

(ii) Each third-party collecting entity that receives such a request from an individual shall delete all the covered data of the individual not later than 30 days after the request is received by the third-party collecting entity.

(iii) Notwithstanding the provisions of clauses (i) and (ii), a third-party collecting entity may decline to fulfill a “Do Not Collect” request from an individual who it has actual knowledge has been convicted of a crime related to the abduction or sexual exploitation of a child, and the data the entity is collecting is necessary to effectuate the purposes of a national or State-run sex offender registry or the congressionally designated entity that serves as the nonprofit national resource center and clearinghouse to provide assistance to victims, families, child-serving professionals, and the general public on missing and exploited children issues.

(c) Penalties.—

(1) IN GENERAL.—A third-party collecting entity that fails to register or provide the notice as required under this section shall be liable for—

(A) a civil penalty of $100 for each day the third-party collecting entity fails to register or provide notice as required under this section, not to exceed a total of $10,000 for any year; and

(B) an amount equal to the registration fees due under paragraph (2)(A) of subsection (b) for each year that the third-party collecting entity failed to register as required under paragraph (1) of such subsection.

(2) RULE OF CONSTRUCTION.—Nothing in this subsection shall be construed as altering, limiting, or affecting any enforcement authorities or remedies under this Act.
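
The penalty arithmetic in paragraph (1) can be illustrated as follows; this is a minimal sketch, not part of the bill text, and it assumes that the subparagraph (B) amount is the $100 registration fee for each year of missed registration and that the $10,000 cap applies per year.

```python
def registration_penalty(days_unregistered_by_year: dict[int, int]) -> int:
    """Illustrative reading of paragraph (1): $100 per day of failing to register,
    capped at $10,000 in any year, plus the unpaid $100 registration fee for each
    year in which registration was required but not made."""
    total = 0
    for days_late in days_unregistered_by_year.values():
        total += min(days_late * 100, 10_000)  # subparagraph (A), per-year cap (assumption)
        total += 100                           # subparagraph (B), fee due under subsection (b)(2)(A)
    return total


# Hypothetical example: unregistered for 40 days in 2024 and 200 days in 2025.
print(registration_penalty({2024: 40, 2025: 200}))  # (4,000 + 100) + (10,000 + 100) = 14,200
```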

SEC. 207. Civil rights and algorithms.

(a) Civil rights protections.—

(1) IN GENERAL.—A covered entity or a service provider may not collect, process, or transfer covered data in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability.

(2) EXCEPTIONS.—This subsection shall not apply to—

(A) the collection, processing, or transfer of covered data for the purpose of—

(i) a covered entity’s or a service provider’s self-testing to prevent or mitigate unlawful discrimination; or

(ii) diversifying an applicant, participant, or customer pool; or

(B) any private club or group not open to the public, as described in section 201(e) of the Civil Rights Act of 1964 (42 U.S.C. 2000a(e)).

(b) FTC enforcement assistance.—

(1) IN GENERAL.—Whenever the Commission obtains information that a covered entity or service provider may have collected, processed, or transferred covered data in violation of subsection (a), the Commission shall transmit such information as allowable under Federal law to any Executive agency with authority to initiate enforcement actions or proceedings relating to such violation.

(2) ANNUAL REPORT.—Not later than 3 years after the date of enactment of this Act, and annually thereafter, the Commission shall submit to Congress a report that includes a summary of—

(A) the types of information the Commission transmitted to Executive agencies under paragraph (1) during the previous 1-year period; and

(B) how such information relates to Federal civil rights laws.

(3) TECHNICAL ASSISTANCE.—In transmitting information under paragraph (1), the Commission may consult and coordinate with, and provide technical and investigative assistance, as appropriate, to such Executive agency.

(4) COOPERATION WITH OTHER AGENCIES.—The Commission may implement this subsection by executing agreements or memoranda of understanding with the appropriate Executive agencies.

(c) Covered algorithm impact and evaluation.—

(1) COVERED ALGORITHM IMPACT ASSESSMENT.—

(A) IMPACT ASSESSMENT.—Notwithstanding any other provision of law, not later than 2 years after the date of enactment of this Act, and annually thereafter, a large data holder that uses a covered algorithm in a manner that poses a consequential risk of harm to an individual or group of individuals, and uses such covered algorithm, solely or in part, to collect, process, or transfer covered data, shall conduct an impact assessment of such algorithm in accordance with subparagraph (B).

(B) IMPACT ASSESSMENT SCOPE.—The impact assessment required under subparagraph (A) shall provide the following:

(i) A detailed description of the design process and methodologies of the covered algorithm.

(ii) A statement of the purpose and proposed uses of the covered algorithm.

(iii) A detailed description of the data used by the covered algorithm, including the specific categories of data that will be processed as input and any data used to train the model that the covered algorithm relies on, if applicable.

(iv) A description of the outputs produced by the covered algorithm.

(v) An assessment of the necessity and proportionality of the covered algorithm in relation to its stated purpose.

(vi) A detailed description of steps the large data holder has taken or will take to mitigate potential harms from the covered algorithm to an individual or group of individuals, including related to—

(I) covered minors;

(II) making or facilitating advertising for, or determining access to, or restrictions on the use of housing, education, employment, healthcare, insurance, or credit opportunities;

(III) determining access to, or restrictions on the use of, any place of public accommodation, particularly as such harms relate to the protected characteristics of individuals, including race, color, religion, national origin, sex, or disability;

(IV) disparate impact on the basis of individuals’ race, color, religion, national origin, sex, or disability status; or

(V) disparate impact on the basis of individuals’ political party registration status.

(2) ALGORITHM DESIGN EVALUATION.—Notwithstanding any other provision of law, not later than 2 years after the date of enactment of this Act, a covered entity or service provider that knowingly develops a covered algorithm that is designed, solely or in part, to collect, process, or transfer covered data in furtherance of a consequential decision shall, prior to deploying the covered algorithm in interstate commerce, evaluate the design, structure, and inputs of the covered algorithm, including any training data used to develop the covered algorithm, to reduce the risk of the potential harms identified under paragraph (1)(B).

(3) OTHER CONSIDERATIONS.—

(A) FOCUS.—In complying with paragraphs (1) and (2), a covered entity and a service provider may focus the impact assessment or evaluation on any covered algorithm, or portions of a covered algorithm, that will be put to use and may reasonably contribute to the risk of the potential harms identified under paragraph (1)(B).

(B) AVAILABILITY.—

(i) IN GENERAL.—A covered entity and a service provider—

(I) shall, not later than 30 days after completing an impact assessment or evaluation, submit the impact assessment or evaluation conducted under paragraph (1) or (2) to the Commission;

(II) shall, upon request, make such impact assessment and evaluation available to Congress; and

(III) may make a summary of such impact assessment and evaluation publicly available in a place that is easily accessible to individuals.

(ii) TRADE SECRETS.—Covered entities and service providers may redact and segregate any trade secret (as defined in section 1839 of title 18, United States Code) or other confidential or proprietary information from public disclosure under this subparagraph and the Commission shall abide by its obligations under section 6(f) of the Federal Trade Commission Act (15 U.S.C. 46(f)) in regard to such information.

(C) ENFORCEMENT.—The Commission may not use any information obtained solely and exclusively through a covered entity or a service provider’s disclosure of information to the Commission in compliance with this section for any purpose other than enforcing this Act with the exception of enforcing consent orders, including the study and report provisions in paragraph (6). This subparagraph does not preclude the Commission from providing this information to Congress in response to a subpoena.

(4) GUIDANCE.—Not later than 2 years after the date of enactment of this Act, the Commission shall, in consultation with the Secretary of Commerce, or their respective designees, publish guidance regarding compliance with this section.

(5) RULEMAKING AND EXEMPTION.—The Commission shall have authority under section 553 of title 5, United States Code, to promulgate regulations as necessary to establish processes by which a large data holder—

(A) shall submit an impact assessment to the Commission under paragraph (3)(B)(i)(I); and

(B) may exclude from this subsection any covered algorithm that presents low or minimal consequential risk of harm to an individual or group of individuals.

(6) STUDY AND REPORT.—

(A) STUDY.—The Commission, in consultation with the Secretary of Commerce or the Secretary’s designee, shall conduct a study, to review any impact assessment or evaluation submitted under this subsection. Such study shall include an examination of—

(i) best practices for the assessment and evaluation of covered algorithms; and

(ii) methods to reduce the risk of harm to individuals that may be related to the use of covered algorithms.

(B) REPORT.—

(i) INITIAL REPORT.—Not later than 3 years after the date of enactment of this Act, the Commission, in consultation with the Secretary of Commerce or the Secretary’s designee, shall submit to Congress a report containing the results of the study conducted under subparagraph (A), together with recommendations for such legislation and administrative action as the Commission determines appropriate.

(ii) ADDITIONAL REPORTS.—Not later than 3 years after submission of the initial report under clause (i), and as the Commission determines necessary thereafter, the Commission shall submit to Congress an updated version of such report.

SEC. 208. Data security and protection of covered data.

(a) Establishment of data security practices.—

(1) IN GENERAL.—A covered entity or service provider shall establish, implement, and maintain reasonable administrative, technical, and physical data security practices and procedures to protect and secure covered data against unauthorized access and acquisition.

(2) CONSIDERATIONS.—The reasonable administrative, technical, and physical data security practices required under paragraph (1) shall be appropriate to—

(A) the size and complexity of the covered entity or service provider;

(B) the nature and scope of the covered entity or the service provider’s collecting, processing, or transferring of covered data;

(C) the volume and nature of the covered data collected, processed, or transferred by the covered entity or service provider;

(D) the sensitivity of the covered data collected, processed, or transferred;

(E) the current state of the art (and limitations thereof) in administrative, technical, and physical safeguards for protecting such covered data; and

(F) the cost of available tools to improve security and reduce vulnerabilities to unauthorized access and acquisition of such covered data in relation to the risks and nature of the covered data.

(b) Specific requirements.—The data security practices of the covered entity and of the service provider required under subsection (a) shall include, for each respective entity’s own system or systems, at a minimum, the following practices:

(1) ASSESS VULNERABILITIES.—Identifying and assessing any material internal and external risk to, and vulnerability in, the security of each system maintained by the covered entity that collects, processes, or transfers covered data, or service provider that collects, processes, or transfers covered data on behalf of the covered entity, including unauthorized access to or risks to such covered data, human vulnerabilities, access rights, and the use of service providers. With respect to large data holders, such activities shall include a plan to receive and reasonably respond to unsolicited reports of vulnerabilities by any entity or individual and by performing a reasonable investigation of such reports.

(2) PREVENTIVE AND CORRECTIVE ACTION.—Taking preventive and corrective action designed to mitigate reasonably foreseeable risks or vulnerabilities to covered data identified by the covered entity or service provider, consistent with the nature of such risk or vulnerability and the entity’s role in collecting, processing, or transferring the data. Such action may include implementing administrative, technical, or physical safeguards or changes to data security practices or the architecture, installation, or implementation of network or operating software, among other actions.

(3) EVALUATION OF PREVENTIVE AND CORRECTIVE ACTION.—Evaluating and making reasonable adjustments to the action described in paragraph (2) in light of any material changes in technology, internal or external threats to covered data, and the covered entity or service provider’s own changing business arrangements or operations.

(4) INFORMATION RETENTION AND DISPOSAL.—Disposing of covered data in accordance with a retention schedule that shall require the deletion of covered data when such data is required to be deleted by law or is no longer necessary for the purpose for which the data was collected, processed, or transferred, unless an individual has provided affirmative express consent to such retention. Such disposal shall include destroying, permanently erasing, or otherwise modifying the covered data to make such data permanently unreadable or indecipherable and unrecoverable to ensure ongoing compliance with this section. Service providers shall establish practices to delete or return covered data to a covered entity as requested at the end of the provision of services unless retention of the covered data is required by law, consistent with section 302(a)(6).

(5) TRAINING.—Training each employee with access to covered data on how to safeguard covered data and updating such training as necessary.

(6) DESIGNATION.—Designating an officer, employee, or employees to maintain and implement such practices.

(7) INCIDENT RESPONSE.—Implementing procedures to detect, respond to, or recover from security incidents, including breaches.
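
The retention-schedule disposal described in subsection (b)(4) can be sketched as a periodic job; the following is a minimal illustration with hypothetical record fields and a hypothetical one-year retention period, and it is not part of the bill text. It omits the mechanics of permanent erasure or modification required for disposal.

```python
from datetime import date, timedelta

RETENTION_PERIOD = timedelta(days=365)  # hypothetical schedule entry for one data category


def records_due_for_disposal(records: list[dict], today: date) -> list[dict]:
    """Flag covered data for disposal under a retention schedule: data past its
    retention period that is no longer necessary for the purpose for which it
    was collected, absent affirmative express consent to continued retention."""
    return [
        record for record in records
        if today - record["collected_on"] > RETENTION_PERIOD
        and not record["still_necessary_for_purpose"]
        and not record["consent_to_retain"]
    ]


# Hypothetical example: one stale record, collected more than two years ago.
stale = [{"collected_on": date(2023, 1, 10), "still_necessary_for_purpose": False, "consent_to_retain": False}]
print(records_due_for_disposal(stale, date(2025, 6, 1)))
```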

(c) Regulations.—The Commission may promulgate, in accordance with section 553 of title 5, United States Code, technology-neutral regulations to establish processes for complying with this section. The Commission shall consult with the National Institute of Standards and Technology in establishing such processes.

SEC. 209. Small business protections.

(a) Establishment of exemption.—Any covered entity or service provider that can establish that it met the requirements described in subsection (b) for the period of the 3 preceding calendar years (or for the period during which the covered entity or service provider has been in existence if such period is less than 3 years) shall—

(1) be exempt from compliance with section 203(a)(4), paragraphs (1) through (3) and (5) through (7) of section 208(b), and section 301(c); and

(2) at the covered entity’s sole discretion, have the option of complying with section 203(a)(2) by, after receiving a verified request from an individual to correct covered data of the individual under such section, deleting such covered data in its entirety instead of making the requested correction.

(b) Exemption requirements.—The requirements of this subsection are, with respect to a covered entity or a service provider, the following:

(1) The covered entity or service provider’s average annual gross revenues during the period did not exceed $41,000,000.

(2) The covered entity or service provider, on average, did not annually collect or process the covered data of more than 200,000 individuals during the period beyond the purpose of initiating, rendering, billing for, finalizing, completing, or otherwise collecting payment for a requested service or product, so long as all covered data for such purpose was deleted or de-identified within 90 days, except when necessary to investigate fraud or as consistent with a covered entity’s return policy.

(3) The covered entity or service provider did not derive more than 50 percent of its revenue from transferring covered data during any year (or part of a year if the covered entity has been in existence for less than 1 year) that occurs during the period.
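
A minimal sketch of the subsection (b) thresholds, assuming the paragraph (2) count has already been limited to individuals whose data was retained beyond the payment-processing carve-out; the function and argument names are hypothetical and the sketch is illustrative only, not part of the bill text.

```python
def qualifies_under_section_209(annual_revenues: list[float],
                                annual_individuals: list[int],
                                transfer_revenue_shares: list[float]) -> bool:
    """Illustrative check of the three subsection (b) thresholds over the
    preceding 3 calendar years (or the entity's lifetime, if shorter)."""
    years = len(annual_revenues)
    avg_revenue_ok = sum(annual_revenues) / years <= 41_000_000                  # paragraph (1)
    avg_individuals_ok = sum(annual_individuals) / years <= 200_000              # paragraph (2)
    transfer_share_ok = all(share <= 0.50 for share in transfer_revenue_shares)  # paragraph (3), each year
    return avg_revenue_ok and avg_individuals_ok and transfer_share_ok


# Hypothetical three-year-old business.
print(qualifies_under_section_209([35e6, 38e6, 40e6], [120_000, 150_000, 180_000], [0.10, 0.12, 0.08]))
```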

(c) Revenue defined.—For purposes of this section, the term “revenue” as it relates to any covered entity or service provider that is not organized to carry on business for its own profit or that of its members, means the gross receipts the covered entity or service provider received in whatever form from all sources without subtracting any costs or expenses, and includes contributions, gifts, grants, dues or other assessments, income from investments, or proceeds from the sale of real or personal property.

SEC. 210. Unified opt-out mechanisms.

(a) In general.—For the rights established under subsection (b) of section 204, subsection (c) of section 204 (except as provided for under section 101(b)(16)), and section 206(b)(3)(C), following public notice and opportunity to comment and not later than 18 months after the date of enactment of this Act, the Commission shall establish or recognize one or more acceptable privacy protective, centralized mechanisms, including global privacy signals such as browser or device privacy settings, other tools offered by covered entities or service providers, and registries of identifiers, for individuals to exercise all such rights through a single interface for a covered entity or service provider to utilize to allow an individual to make such opt out designations with respect to covered data related to such individual.

(b) Requirements.—Any such centralized opt-out mechanism shall—

(1) require covered entities or service providers acting on behalf of covered entities to inform individuals about the centralized opt-out choice;

(2) not be required to be the default setting, but may be the default setting provided that in all cases the mechanism clearly represents the individual’s affirmative, freely given, and unambiguous choice to opt out;

(3) be consumer-friendly, clearly described, and easy-to-use by a reasonable individual;

(4) permit the covered entity or service provider acting on behalf of a covered entity to have an authentication process the covered entity or service provider acting on behalf of a covered entity may use to determine if the mechanism represents a legitimate request to opt out;

(5) be provided in any covered language in which the covered entity provides products or services subject to the opt-out; and

(6) be provided in a manner that is reasonably accessible to and usable by individuals with disabilities.
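
One existing browser-level signal of the kind subsection (a) describes is the Global Privacy Control proposal, which conveys an opt-out preference in a Sec-GPC: 1 request header. The bill does not name any particular signal, so the following is only a hypothetical sketch, not part of the bill text, of how a covered entity's server might honor such a signal if the Commission were to recognize it.

```python
def request_signals_opt_out(headers: dict[str, str]) -> bool:
    """Treat a Global Privacy Control signal (Sec-GPC: 1) as an individual's
    opt-out designation of the kind the Commission could recognize under this section."""
    return headers.get("Sec-GPC", "").strip() == "1"


# Hypothetical incoming request headers.
if request_signals_opt_out({"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}):
    print("Suppress covered data transfers and targeted advertising for this individual.")
```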

TITLE III—Corporate Accountability

SEC. 301. Executive responsibility.

(a) In general.—Beginning 1 year after the date of enactment of this Act, an executive officer of a large data holder shall annually certify, in good faith, to the Commission, in a manner specified by the Commission by regulation under section 553 of title 5, United States Code, that the entity maintains—

(1) internal controls reasonably designed to comply with this Act; and

(2) internal reporting structures to ensure that such certifying executive officer is involved in and responsible for the decisions that impact the compliance by the large data holder with this Act.

(b) Requirements.—A certification submitted under subsection (a) shall be based on a review of the effectiveness of the internal controls and reporting structures of the large data holder that is conducted by the certifying executive officer not more than 90 days before the submission of the certification. A certification submitted under subsection (a) is made in good faith if the certifying officer had, after a reasonable investigation, reasonable ground to believe and did believe, at the time that certification was submitted, that the statements therein were true and that there was no omission to state a material fact required to be stated therein or necessary to make the statements therein not misleading.

(c) Designation of privacy and data security officer.—

(1) IN GENERAL.—A covered entity or service provider that has more than 15 employees shall designate—

(A) 1 or more qualified employees as privacy officers; and

(B) 1 or more qualified employees (in addition to any employee designated under subparagraph (A)) as data security officers.

(2) REQUIREMENTS FOR OFFICERS.—An employee who is designated by a covered entity or a service provider as a privacy officer or a data security officer pursuant to paragraph (1) shall, at a minimum—

(A) implement a data privacy program and data security program to safeguard the privacy and security of covered data in compliance with the requirements of this Act; and

(B) facilitate the covered entity or service provider’s ongoing compliance with this Act.

(3) ADDITIONAL REQUIREMENTS FOR LARGE DATA HOLDERS.—A large data holder shall designate at least 1 of the officers described in paragraph (1) to report directly to the highest official at the large data holder as a privacy protection officer who shall, in addition to the requirements in paragraph (2), either directly or through a supervised designee or designees—

(A) establish processes to periodically review and update the privacy and security policies, practices, and procedures of the large data holder, as necessary;

(B) conduct biennial and comprehensive audits to ensure the policies, practices, and procedures of the large data holder ensure the large data holder is in compliance with this Act and ensure such audits are accessible to the Commission upon request;

(C) develop a program to educate and train employees about compliance requirements of this Act;

(D) maintain updated, accurate, clear, and understandable records of all material privacy and data security practices undertaken by the large data holder; and

(E) serve as the point of contact between the large data holder and enforcement authorities.

(d) Large data holder privacy impact assessments.—

(1) IN GENERAL.—Not later than 1 year after the date of enactment of this Act or 1 year after the date on which a covered entity first meets the definition of large data holder, whichever is earlier, and biennially thereafter, each covered entity that is a large data holder shall conduct a privacy impact assessment that weighs the benefits of the large data holder’s covered data collecting, processing, and transfer practices against the potential adverse consequences of such practices, including substantial privacy risks, to individual privacy.

(2) ASSESSMENT REQUIREMENTS.—A privacy impact assessment required under paragraph (1) shall be—

(A) reasonable and appropriate in scope given—

(i) the nature of the covered data collected, processed, and transferred by the large data holder;

(ii) the volume of the covered data collected, processed, and transferred by the large data holder; and

(iii) the potential material risks posed to the privacy of individuals by the collecting, processing, and transfer of covered data by the large data holder;

(B) documented in written form and maintained by the large data holder unless rendered out of date by a subsequent assessment conducted under paragraph (1); and

(C) approved by the privacy protection officer designated in subsection (c)(3) of the large data holder, as applicable.

(3) ADDITIONAL FACTORS TO INCLUDE IN ASSESSMENT.—In assessing the privacy risks, including substantial privacy risks, the large data holder must include reviews of the means by which technologies, including blockchain and distributed ledger technologies and other emerging technologies, are used to secure covered data.

(e) Other privacy impact assessments.—

(1) IN GENERAL.—Not later than 1 year after the date of enactment of this Act and biennially thereafter, each covered entity that is not a large data holder and does not meet the requirements for covered entities under section 209 shall conduct a privacy impact assessment. Such assessment shall weigh the benefits of the covered entity’s covered data collecting, processing, and transfer practices that may cause a substantial privacy risk against the potential material adverse consequences of such practices to individual privacy.

(2) ASSESSMENT REQUIREMENTS.—A privacy impact assessment required under paragraph (1) shall be—

(A) reasonable and appropriate in scope given—

(i) the nature of the covered data collected, processed, and transferred by the covered entity;

(ii) the volume of the covered data collected, processed, and transferred by the covered entity; and

(iii) the potential risks posed to the privacy of individuals by the collecting, processing, and transfer of covered data by the covered entity; and

(B) documented in written form and maintained by the covered entity unless rendered out of date by a subsequent assessment conducted under paragraph (1).

(3) ADDITIONAL FACTORS TO INCLUDE IN ASSESSMENT.—In assessing the privacy risks, including substantial privacy risks, the covered entity may include reviews of the means by which technologies, including blockchain and distributed ledger technologies and other emerging technologies, are used to secure covered data.

SEC. 302. Service providers and third parties.

(a) Service providers.—A service provider—

(1) shall adhere to the instructions of a covered entity and only collect, process, and transfer service provider data to the extent necessary and proportionate to provide a service requested by the covered entity, as set out in the contract required by subsection (b), and this paragraph does not require a service provider to collect, process, or transfer covered data if the service provider would not otherwise do so;

(2) may not collect, process, or transfer service provider data if the service provider has actual knowledge that a covered entity violated this Act with respect to such data;

(3) shall assist a covered entity in responding to a request made by an individual under section 203 or 204, by either—

(A) providing appropriate technical and organizational measures, taking into account the nature of the processing and the information reasonably available to the service provider, for the covered entity to comply with such request for service provider data; or

(B) fulfilling a request by a covered entity to execute an individual rights request that the covered entity has determined should be complied with, by either—

(i) complying with the request pursuant to the covered entity’s instructions; or

(ii) providing written verification to the covered entity that it does not hold covered data related to the request, that complying with the request would be inconsistent with its legal obligations, or that the request falls within an exception to section 203 or 204;

(4) may engage another service provider for purposes of processing service provider data on behalf of a covered entity only after providing that covered entity with notice and pursuant to a written contract that requires such other service provider to satisfy the obligations of the service provider with respect to such service provider data, including that the other service provider be treated as a service provider under this Act;

(5) shall, upon the reasonable request of the covered entity, make available to the covered entity information necessary to demonstrate the compliance of the service provider with the requirements of this Act, which may include making available a report of an independent assessment arranged by the service provider on terms agreed to by the service provider and the covered entity, providing information necessary to enable the covered entity to conduct and document a privacy impact assessment required by subsection (d) or (e) of section 301, and making available the report required under section 207(c)(2);

(6) shall, at the covered entity’s direction, delete or return all covered data to the covered entity as requested at the end of the provision of services, unless retention of the covered data is required by law;

(7) shall develop, implement, and maintain reasonable administrative, technical, and physical safeguards that are designed to protect the security and confidentiality of covered data the service provider processes consistent with section 208; and

(8) shall allow, and cooperate with, reasonable assessments by the covered entity or the covered entity’s designated assessor; alternatively, the service provider may arrange for a qualified and independent assessor to conduct an assessment of the service provider’s policies and technical and organizational measures in support of the obligations under this Act using an appropriate and accepted control standard or framework and assessment procedure for such assessments. The service provider shall provide a report of such assessment to the covered entity upon request.

(b) Contracts Between Covered Entities and Service Providers.—

(1) REQUIREMENTS.—A person or entity may only act as a service provider pursuant to a written contract between the covered entity and the service provider, or a written contract between one service provider and a second service provider as described under subsection (a)(4), if the contract—

(A) sets forth the data processing procedures of the service provider with respect to collection, processing, or transfer performed on behalf of the covered entity or service provider;

(B) clearly sets forth—

(i) instructions for collecting, processing, or transferring data;

(ii) the nature and purpose of collecting, processing, or transferring;

(iii) the type of data subject to collecting, processing, or transferring;

(iv) the duration of processing; and

(v) the rights and obligations of both parties, including a method by which the service provider shall notify the covered entity of material changes to its privacy practices;

(C) does not relieve a covered entity or a service provider of any requirement or liability imposed on such covered entity or service provider under this Act; and

(D) prohibits—

(i) collecting, processing, or transferring covered data in contravention to subsection (a); and

(ii) combining service provider data with covered data which the service provider receives from or on behalf of another person or persons or collects from the interaction of the service provider with an individual, provided that such combining is not necessary to effectuate a purpose described in paragraphs (1) through (15) of section 101(b) and is otherwise permitted under the contract required by this subsection.

(2) CONTRACT TERMS.—Each service provider shall retain copies of previous contracts entered into in compliance with this subsection with each covered entity to which it provides requested products or services.

(c) Relationship Between Covered Entities and Service Providers.—

(1) Determining whether a person is acting as a covered entity or service provider with respect to a specific processing of covered data is a fact-based determination that depends upon the context in which such data is processed.

(2) A person that is not limited in its processing of covered data pursuant to the instructions of a covered entity, or that fails to adhere to such instructions, is a covered entity and not a service provider with respect to a specific processing of covered data. A service provider that continues to adhere to the instructions of a covered entity with respect to a specific processing of covered data remains a service provider. If a service provider begins, alone or jointly with others, determining the purposes and means of the processing of covered data, it is a covered entity and not a service provider with respect to the processing of such data.

(3) A covered entity that transfers covered data to a service provider or a service provider that transfers covered data to a covered entity or another service provider, in compliance with the requirements of this Act, is not liable for a violation of this Act by the service provider or covered entity to whom such covered data was transferred, if at the time of transferring such covered data, the covered entity or service provider did not have actual knowledge that the service provider or covered entity would violate this Act.

(4) A covered entity or service provider that receives covered data in compliance with the requirements of this Act is not in violation of this Act as a result of a violation by a covered entity or service provider from which such data was received.

(d) Third parties.—A third party—

(1) shall not process third party data for a processing purpose other than, in the case of sensitive covered data, the processing purpose for which the individual gave affirmative express consent or to effect a purpose enumerated in paragraph (1), (3), or (5) of section 101(b) and, in the case of non-sensitive data, the processing purpose for which the covered entity made a disclosure pursuant to section 202(b)(4); and

(2) for purposes of paragraph (1), may reasonably rely on representations made by the covered entity that transferred the third party data if the third party conducts reasonable due diligence on the representations of the covered entity and finds those representations to be credible.

(e) Additional obligations on covered entities.—

(1) IN GENERAL.—A covered entity or service provider shall exercise reasonable due diligence in—

(A) selecting a service provider; and

(B) deciding to transfer covered data to a third party.

(2) GUIDANCE.—Not later than 2 years after the date of enactment of this Act, the Commission shall publish guidance regarding compliance with this subsection, taking into consideration the burdens on large data holders, covered entities who are not large data holders, and covered entities meeting the requirements of section 209.

(f) Rule of construction.—Solely for the purposes of this section, the requirements for service providers to contract with, assist, and follow the instructions of covered entities shall be read to include requirements to contract with, assist, and follow the instructions of a government entity if the service provider is providing a service to a government entity.

SEC. 303. Technical compliance programs.

(a) In general.—Not later than 3 years after the date of enactment of this Act, the Commission shall promulgate regulations under section 553 of title 5, United States Code, to establish a process for the proposal and approval of technical compliance programs under this section used by a covered entity to collect, process, or transfer covered data.

(b) Scope of programs.—The technical compliance programs established under this section shall, with respect to a technology, product, service, or method used by a covered entity to collect, process, or transfer covered data—

(1) establish publicly available guidelines for compliance with this Act; and

(2) meet or exceed the requirements of this Act.

(c) Approval process.—

(1) IN GENERAL.—Any request for approval, amendment, or repeal of a technical compliance program may be submitted to the Commission by any person, including a covered entity, a representative of a covered entity, an association of covered entities, or a public interest group or organization. Within 90 days after the request is made, the Commission shall publish the request and provide an opportunity for public comment on the proposal.

(2) EXPEDITED RESPONSE TO REQUESTS.—Beginning 1 year after the date of enactment of this Act, the Commission shall act upon a request for the proposal and approval of a technical compliance program not later than 1 year after the filing of the request, and shall set forth publicly in writing the conclusions of the Commission with regard to such request.

(d) Right to Appeal.—Final action by the Commission on a request for approval, amendment, or repeal of a technical compliance program, or the failure to act within the 1-year period after a request for approval, amendment, or repeal of a technical compliance program is made under subsection (c), may be appealed to a Federal district court of the United States of appropriate jurisdiction as provided for in section 702 of title 5, United States Code.

(e) Effect on enforcement.—

(1) IN GENERAL.—Prior to commencing an investigation or enforcement action against any covered entity under this Act, the Commission and State attorney general shall consider the covered entity’s history of compliance with any technical compliance program approved under this section and any action taken by the covered entity to remedy noncompliance with such program. If such enforcement action described in section 403 is brought, the covered entity’s history of compliance with any technical compliance program approved under this section and any action taken by the covered entity to remedy noncompliance with such program shall be taken into consideration when determining liability or a penalty. The covered entity’s history of compliance with any technical compliance program shall not affect any burden of proof or the weight given to evidence in an enforcement or judicial proceeding.

(2) COMMISSION AUTHORITY.—Approval of a technical compliance program shall not limit the authority of the Commission, including the Commission’s authority to commence an investigation or enforcement action against any covered entity under this Act or any other Act.

(3) RULE OF CONSTRUCTION.—Nothing in this subsection shall provide any individual, class of individuals, or person with any right to seek discovery of any non-public Commission deliberation or activity or impose any pleading requirement on the Commission if the Commission brings an enforcement action of any kind.

SEC. 304. Commission approved compliance guidelines.

(a) Application for compliance guideline Approval.—

(1) IN GENERAL.—A covered entity that is not a third-party collecting entity and meets the requirements of section 209, or a group of such covered entities, may apply to the Commission for approval of 1 or more sets of compliance guidelines governing the collection, processing, and transfer of covered data by the covered entity or group of covered entities.

(2) APPLICATION REQUIREMENTS.—Such application shall include—

(A) a description of how the proposed guidelines will meet or exceed the requirements of this Act;

(B) a description of the entities or activities the proposed set of compliance guidelines is designed to cover;

(C) a list of the covered entities that meet the requirements of section 209 and are not third-party collecting entities, if any are known at the time of application, that intend to adhere to the compliance guidelines; and

(D) a description of how such covered entities will be independently assessed for adherence to such compliance guidelines, including the independent organization not associated with any of the covered entities that may participate in guidelines that will administer such guidelines.

(3) COMMISSION REVIEW.—

(A) INITIAL APPROVAL.—

(i) PUBLIC COMMENT PERIOD.—Within 90 days after the receipt of proposed guidelines submitted pursuant to paragraph (2), the Commission shall publish the application and provide an opportunity for public comment on such compliance guidelines.

(ii) APPROVAL.—The Commission shall approve an application regarding proposed guidelines under paragraph (2) if the applicant demonstrates that the compliance guidelines—

(I) meet or exceed requirements of this Act;

(II) provide for the regular review and validation by an independent organization not associated with any of the covered entities that may participate in the guidelines and that is approved by the Commission to conduct such reviews of the compliance guidelines of the covered entity or entities to ensure that the covered entity or entities continue to meet or exceed the requirements of this Act; and

(III) include a means of enforcement if a covered entity does not meet or exceed the requirements in the guidelines, which may include referral to the Commission for enforcement consistent with section 401 or referral to the appropriate State attorney general for enforcement consistent with section 402.

(iii) TIMELINE.—Within 1 year after receiving an application regarding proposed guidelines under paragraph (2), the Commission shall issue a determination approving or denying the application and providing its reasons for approving or denying such application.

(B) APPROVAL OF MODIFICATIONS.—

(i) IN GENERAL.—If the independent organization administering a set of guidelines makes material changes to guidelines previously approved by the Commission, the independent organization shall submit the updated guidelines to the Commission for approval. As soon as feasible, the Commission shall publish the updated guidelines and provide an opportunity for public comment.

(ii) TIMELINE.—The Commission shall approve or deny any material change to the guidelines within 1 year after receipt of the submission for approval.

(b) Withdrawal of Approval.—If at any time the Commission determines that the guidelines previously approved no longer meet the requirements of this Act or a regulation promulgated under this Act or that compliance with the approved guidelines is insufficiently enforced by the independent organization administering the guidelines, the Commission shall notify the covered entities or group of such entities and the independent organization of the determination of the Commission to withdraw approval of such guidelines and the basis for doing so. Within 180 days after receipt of such notice, the covered entity or group of such entities and the independent organization may cure any alleged deficiency with the guidelines or the enforcement of such guidelines and submit each proposed cure to the Commission. If the Commission determines that such cures eliminate the alleged deficiency in the guidelines, then the Commission may not withdraw approval of such guidelines on the basis of such determination.

(c) Deemed compliance.—A covered entity that is eligible to participate under subsection (a)(1) and participates in guidelines approved under this section shall be deemed in compliance with the relevant provisions of this Act if such covered entity is in compliance with such guidelines.

SEC. 305. Digital content forgeries.

(a) Reports.—Not later than 1 year after the date of enactment of this Act, and annually thereafter, the Secretary of Commerce or the Secretary’s designee shall publish a report regarding digital content forgeries.

(b) Requirements.—Each report under subsection (a) shall include the following:

(1) A definition of digital content forgeries along with accompanying explanatory materials.

(2) A description of the common sources of digital content forgeries in the United States and commercial sources of digital content forgery technologies.

(3) An assessment of the uses, applications, and harms of digital content forgeries.

(4) An analysis of the methods and standards available to identify digital content forgeries as well as a description of the commercial technological counter-measures that are, or could be, used to address concerns with digital content forgeries, which may include the provision of warnings to viewers of suspect content.

(5) A description of the types of digital content forgeries, including those used to commit fraud, cause harm, or violate any provision of law.

(6) Any other information determined appropriate by the Secretary of Commerce or the Secretary’s designee.

TITLE IV—Enforcement, Applicability, and Miscellaneous

SEC. 401. Enforcement by the Federal Trade Commission.

(a) Bureau of Privacy.—

(1) IN GENERAL.—The Commission shall establish within the Commission a new bureau to be known as the “Bureau of Privacy”, which shall be of similar structure, size, organization, and authority as the existing bureaus within the Commission related to consumer protection and competition.

(2) MISSION.—The mission of the Bureau established under paragraph (1) shall be to assist the Commission in carrying out the duties of the Commission under this Act and related duties under other provisions of law.

(3) TIMELINE.—The Bureau required to be established under paragraph (1) shall be established, staffed, and fully operational not later than 1 year after the date of enactment of this Act.

(b) Office of Business Mentorship.—The Director of the Bureau established under subsection (a)(1) shall establish within the Bureau an office to be known as the “Office of Business Mentorship” to provide guidance and education to covered entities and service providers regarding compliance with this Act. Covered entities or service providers may request advice from the Commission or the Office with respect to a course of action that the covered entity or service provider proposes to pursue and that may relate to the requirements of this Act.

(c) Enforcement by the Federal Trade Commission.—

(1) UNFAIR OR DECEPTIVE ACTS OR PRACTICES.—A violation of this Act or a regulation promulgated under this Act shall be treated as a violation of a rule defining an unfair or deceptive act or practice prescribed under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)).

(2) POWERS OF THE COMMISSION.—

(A) IN GENERAL.—Except as provided in paragraphs (3), (4), and (5), the Commission shall enforce this Act and the regulations promulgated under this Act in the same manner, by the same means, and with the same jurisdiction, powers, and duties as though all applicable terms and provisions of the Federal Trade Commission Act (15 U.S.C. 41 et seq.) were incorporated into and made a part of this Act.

(B) PRIVILEGES AND IMMUNITIES.—Any person who violates this Act or a regulation promulgated under this Act shall be subject to the penalties and entitled to the privileges and immunities provided in the Federal Trade Commission Act (15 U.S.C. 41 et seq.).

(3) LIMITING CERTAIN ACTIONS UNRELATED TO THIS ACT.—If the Commission brings a civil action alleging that an act or practice violates this Act or a regulation promulgated under this Act, the Commission may not seek a cease and desist order against the same defendant under section 5(b) of the Federal Trade Commission Act (15 U.S.C. 45(b)) to stop that same act or practice on the grounds that such act or practice constitutes an unfair or deceptive act or practice.

(4) COMMON CARRIERS AND NONPROFIT ORGANIZATIONS.—Notwithstanding any jurisdictional limitation of the Commission with respect to consumer protection or privacy, the Commission shall enforce this Act and the regulations promulgated under this Act, in the same manner provided in paragraphs (1), (2), (3), and (5), with respect to common carriers subject to the Communications Act of 1934 (47 U.S.C. 151 et seq.) and all Acts amendatory thereof and supplementary thereto and organizations not organized to carry on business for their own profit or that of their members.

(5) PRIVACY AND SECURITY VICTIMS RELIEF FUND.—

(A) ESTABLISHMENT.—There is established in the Treasury of the United States a separate fund to be known as the “Privacy and Security Victims Relief Fund” (in this paragraph referred to as the “Victims Relief Fund”).

(B) DEPOSITS.—Notwithstanding section 3302 of title 31, United States Code, in any judicial or administrative action to enforce this Act or a regulation promulgated under this Act, the amount of any civil penalty obtained against a covered entity or service provider, or any other monetary relief ordered to be paid by a covered entity or service provider to provide redress, payment, compensation, or other relief to individuals that cannot be located or the payment of which would otherwise not be practicable, shall be deposited into the Victims Relief Fund.

(C) USE OF FUNDS.—

(i) USE BY COMMISSION.—Amounts in the Victims Relief Fund shall be available to the Commission, without fiscal year limitation, to provide redress, payment, compensation, or other monetary relief to individuals affected by an act or practice for which relief has been obtained under this Act.

(ii) OTHER PERMISSIBLE USES.—To the extent that the individuals described in clause (i) cannot be located or such redress, payments, compensation, or other monetary relief are otherwise not practicable, the Commission may use such funds for the purpose of—

(I) funding the activities of the Office of Business Mentorship established under subsection (b); or

(II) engaging in technological research that the Commission considers necessary to enforce or administer this Act.

SEC. 402. Enforcement by States.

(a) Civil action.—In any case in which the attorney general or State Privacy Authority of a State has reason to believe that an interest of the residents of that State has been, may be, or is adversely affected by a violation of this Act or a regulation promulgated under this Act by a covered entity or service provider, the attorney general or State Privacy Authority may bring a civil action in the name of the State, or as parens patriae on behalf of the residents of the State. Any such action shall be brought exclusively in an appropriate Federal district court of the United States to—

(1) enjoin such act or practice;

(2) enforce compliance with this Act or such regulation;

(3) obtain damages, civil penalties, restitution, or other compensation on behalf of the residents of such State; or

(4) obtain reasonable attorneys’ fees and other litigation costs reasonably incurred.

(b) Rights of the Commission.—

(1) IN GENERAL.—Except as provided in paragraph (2), the attorney general or State Privacy Authority of a State shall notify the Commission in writing prior to initiating a civil action under subsection (a). Such notification shall include a copy of the complaint to be filed to initiate such action. Upon receiving such notification, the Commission may intervene in such action as a matter of right pursuant to the Federal Rules of Civil Procedure.

(2) FEASIBILITY.—If the notification required by paragraph (1) is not feasible, the attorney general or State Privacy Authority shall notify the Commission immediately after initiating the civil action.

(c) Actions by the Commission.—In any case in which a civil action is instituted by or on behalf of the Commission for violation of this Act or a regulation promulgated under this Act, no attorney general or State Privacy Authority of a State may, during the pendency of such action, institute a civil action against any defendant named in the complaint in the action instituted by or on behalf of the Commission for a violation of this Act or a regulation promulgated under this Act that is alleged in such complaint, if such complaint alleges such violation affected the residents of such State or individuals nationwide. If the Commission brings a civil action against a covered entity or service provider for a violation of this Act or a regulation promulgated under this Act that affects the interests of the residents of a State, the attorney general or State Privacy Authority of such State may intervene in such action as a matter of right pursuant to the Federal Rules of Civil Procedure.

(d) Rule of construction.—Nothing in this section may be construed to prevent the attorney general or State Privacy Authority of a State from exercising the powers conferred on the attorney general or State Privacy Authority to conduct investigations, to administer oaths or affirmations, or to compel the attendance of witnesses or the production of documentary or other evidence.

(e) Preservation of state powers.—Except as provided in subsection (c), nothing in this section may be construed as altering, limiting, or affecting the authority of the attorney general or State Privacy Authority of a State to—

(1) bring an action or other regulatory proceeding arising solely under the law in effect in the State that is not preempted by this Act or under another applicable Federal law; or

(2) exercise the powers conferred on the attorney general or State Privacy Authority by the laws of the State, including the ability to conduct investigations, administer oaths or affirmations, or compel the attendance of witnesses or the production of documentary or other evidence.

SEC. 403. Enforcement by persons.

(a) Enforcement by persons.—

(1) IN GENERAL.—Beginning on the date that is 2 years after the date on which this Act takes effect, any person or class of persons alleging a violation of this Act or a regulation promulgated under this Act by a covered entity or service provider may bring a civil action against such entity in any Federal court of competent jurisdiction.

(2) RELIEF.—In a civil action brought under paragraph (1) in which a plaintiff prevails, the court may award the plaintiff—

(A) an amount equal to the sum of any compensatory damages;

(B) injunctive relief;

(C) declaratory relief; and

(D) reasonable attorney’s fees and litigation costs.

(3) RIGHTS OF THE COMMISSION AND STATE ATTORNEYS GENERAL.—

(A) IN GENERAL.—Prior to a person bringing a civil action under paragraph (1), such person shall notify the Commission and the attorney general of the State where such person resides in writing that such person intends to bring a civil action under such paragraph. Upon receiving such notice, the Commission and State attorney general shall each or jointly make a determination and respond to such person not later than 60 days after receiving such notice as to whether they will intervene in such action pursuant to the Federal Rules of Civil Procedure. If a State attorney general does intervene, they shall only be heard with respect to the interests of the residents of their State.

(B) RETAINED AUTHORITY.—Subparagraph (A) may not be construed to limit the authority of the Commission or any applicable State attorney general or State Privacy Authority to later commence a proceeding or civil action or intervene by motion if the Commission or State attorney general or State Privacy Authority does not commence a proceeding or civil action within the 60-day period.

(C) BAD FAITH.—Any written communication from counsel for an aggrieved party to a covered entity or service provider requesting a monetary payment from that covered entity or service provider regarding a specific claim described in a letter sent pursuant to subsection (d), not including filings in court proceedings, arbitrations, mediations, judgment collection processes, or other communications related to previously initiated litigation or arbitrations, shall be considered to have been sent in bad faith and shall be unlawful as defined in this Act, if the written communication was sent prior to the date that is 60 days after either a State attorney general or the Commission has received the notice required under subparagraph (A).

(4) FTC STUDY.—Beginning on the date that is 5 years after the date of enactment of this Act and every 5 years thereafter, the Commission’s Bureau of Economics and Bureau of Privacy shall assist the Commission in conducting a study to determine the economic impacts in the United States of demand letters sent pursuant to this section and the scope of the rights of a person under this section to bring forth civil actions against covered entities and service providers. Such study shall include the following:

(A) The impact on insurance rates in the United States.

(B) The impact on the ability of covered entities to offer new products or services.

(C) The impact on the creation and growth of new startup companies, including new technology companies.

(D) Any emerging risks, benefits, and long-term trends in relevant marketplaces, supply chains, and labor availability.

(E) The impact on reducing, preventing, or remediating harms to individuals, including from fraud, identity theft, spam, discrimination, defective products, and violations of rights.

(F) The impact on the volume and severity of data security incidents, and the ability to respond to data security incidents.

(G) Other intangible direct and indirect costs and benefits to individuals.

(5) REPORT TO CONGRESS.—Not later than 5 years after the first day on which persons and classes of persons are able to bring civil actions under this subsection, and annually thereafter, the Commission shall submit to the Committee on Energy and Commerce of the House of Representatives and the Committee on Commerce, Science, and Transportation of the Senate a report that contains the results of the study conducted under paragraph (4).

(b) Arbitration agreements and pre-dispute joint action waivers.—

(1) PRE-DISPUTE ARBITRATION AGREEMENTS.—

(A) Notwithstanding any other provision of law, no pre-dispute arbitration agreement with respect to an individual under the age of 18 is enforceable with regard to a dispute arising under this Act.

(B) Notwithstanding any other provision of law, no pre-dispute arbitration agreement is enforceable with regard to a dispute arising under this Act concerning a claim related to gender or partner-based violence or physical harm.

(2) PRE-DISPUTE JOINT-ACTION WAIVERS.—Notwithstanding any other provision of law, no pre-dispute joint-action waiver with respect to an individual under the age of 18 is enforceable with regard to a dispute arising under this Act.

(3) DEFINITIONS.—For purposes of this subsection:

(A) PRE-DISPUTE ARBITRATION AGREEMENT.—The term “pre-dispute arbitration agreement” means any agreement to arbitrate a dispute that has not arisen at the time of the making of the agreement.

(B) PRE-DISPUTE JOINT-ACTION WAIVER.—The term “pre-dispute joint-action waiver” means an agreement, whether or not part of a pre-dispute arbitration agreement, that would prohibit or waive the right of 1 of the parties to the agreement to participate in a joint, class, or collective action in a judicial, arbitral, administrative, or other related forum, concerning a dispute that has not yet arisen at the time of the making of the agreement.

(c) Right to cure.—

(1) NOTICE.—Subject to paragraph (3), with respect to a claim under this section for—

(A) injunctive relief; or

(B) an action against a covered entity or service provider that meets the requirements of section 209 of this Act, such claim may be brought by a person or class of persons if, prior to asserting such claim, the person or class of persons provides to the covered entity or service provider 45 days’ written notice identifying the specific provisions of this Act the person or class of persons alleges have been or are being violated.

(2) EFFECT OF CURE.—Subject to paragraph (3), in the event a cure is possible, if within the 45 days the covered entity or service provider demonstrates to the court that it has cured the noticed violation or violations and provides the person or class of persons an express written statement that the violation or violations has been cured and that no further violations shall occur, a claim for injunctive relief shall not be permitted and may be reasonably dismissed.

(3) RULE OF CONSTRUCTION.—The notice described in paragraph (1) and the reasonable dismissal in paragraph (2) shall not apply more than once to any alleged underlying violation by the same covered entity.

(d) Demand letter.—If a person, or identified members of a class of persons, represented by counsel in regard to an alleged violation or violations of this Act has correspondence sent to a covered entity or service provider by counsel alleging a violation or violations of the provisions of this Act and requesting a monetary payment, such correspondence shall include the following language: “Please visit the website of the Federal Trade Commission for a general description of your rights under the American Data Privacy and Protection Act” followed by a hyperlink to the webpage of the Commission required under section 201. If such correspondence does not include such language and hyperlink, a civil action brought under this section by such person or identified members of the class of persons represented by counsel may be dismissed without prejudice and shall not be reinstated until such person or persons has complied with this subsection.

(e) Applicability.—

(1) IN GENERAL.—This section shall only apply to a claim alleging a violation of section 102, 104, 202, 203, 204, 205(a), 205(b), 206(b)(3)(C), 207(a), 208(a), or 302, or a regulation promulgated under any such section.

(2) EXCEPTION.—This section shall not apply to any claim against a covered entity that has less than $25,000,000 per year in revenue, collects, processes, or transfers the covered data of fewer than 50,000 individuals, and derives less than 50 percent of its revenue from transferring covered data.
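
For illustration only, the exception in paragraph (2) combines three cumulative thresholds (annual revenue, number of individuals whose covered data is handled, and share of revenue derived from transferring covered data). A minimal Python sketch of that reading follows; the function and parameter names are hypothetical and are not drawn from the Act:

    def private_action_exception_applies(annual_revenue_usd: float,
                                         individuals_covered: int,
                                         revenue_share_from_transfers: float) -> bool:
        # All three statutory conditions must hold under this illustrative reading
        # of section 403(e)(2).
        return (annual_revenue_usd < 25_000_000
                and individuals_covered < 50_000
                and revenue_share_from_transfers < 0.50)

    # Example: an entity with $10M in revenue, 20,000 individuals, and 10 percent of
    # revenue from transfers would fall within the exception under this reading.
    print(private_action_exception_applies(10_000_000, 20_000, 0.10))  # True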

SEC. 404. Relationship to Federal and State laws.

(a) Federal law preservation.—

(1) IN GENERAL.—Nothing in this Act or a regulation promulgated under this Act may be construed to limit—

(A) the authority of the Commission, or any other Executive agency, under any other provision of law;

(B) any requirement for a common carrier subject to section 64.2011 of title 47, Code of Federal Regulations (or any successor regulation) regarding information security breaches; or

(C) any other provision of Federal law, except as otherwise provided in this Act.

(2) ANTITRUST SAVINGS CLAUSE.—

(A) FULL APPLICATION OF THE ANTITRUST LAW.—Nothing in this Act may be construed to modify, impair or supersede the operation of the antitrust law or any other provision of law.

(B) NO IMMUNITY FROM THE ANTITRUST LAW.—Nothing in the regulatory regime adopted by this Act shall be construed as operating to limit any law deterring anticompetitive conduct or diminishing the need for full application of the antitrust law. Nothing in this Act explicitly or implicitly precludes the application of the antitrust law.

(C) DEFINITION OF ANTITRUST LAW.—For purposes of this section, the term antitrust law has the same meaning as in subsection (a) of the first section of the Clayton Act (15 U.S.C. 12), except that such term includes section 5 of the Federal Trade Commission Act (15 U.S.C. 45) to the extent that such section 5 applies to unfair methods of competition.

(3) APPLICABILITY OF OTHER PRIVACY REQUIREMENTS.—A covered entity that is required to comply with title V of the Gramm-Leach-Bliley Act (15 U.S.C. 6801 et seq.), the Health Information Technology for Economic and Clinical Health Act (42 U.S.C. 17931 et seq.), part C of title XI of the Social Security Act (42 U.S.C. 1320d et seq.), the Fair Credit Reporting Act (15 U.S.C. 1681 et seq.), the Family Educational Rights and Privacy Act (20 U.S.C. 1232g; part 99 of title 34, Code of Federal Regulations) to the extent such covered entity is a school as defined in 20 U.S.C. 1232g(a)(3) or 34 C.F.R. 99.1(a), section 444 of the General Education Provisions Act (commonly known as the “Family Educational Rights and Privacy Act of 1974”) (20 U.S.C. 1232g) and part 99 of title 34, Code of Federal Regulations (or any successor regulation), the Confidentiality of Alcohol and Drug Abuse Patient Records at 42 U.S.C. 290dd-2 and its implementing regulations at 42 CFR part 2, the Genetic Information Non-discrimination Act (GINA), or the regulations promulgated pursuant to section 264(c) of the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. 1320d–2 note), and is in compliance with the data privacy requirements of such regulations, part, title, or Act (as applicable), shall be deemed to be in compliance with the related requirements of this Act, except for section 208, solely and exclusively with respect to data subject to the requirements of such regulations, part, title, or Act. Not later than 1 year after the date of enactment of this Act, the Commission shall issue guidance describing the implementation of this paragraph.

(4) APPLICABILITY OF OTHER DATA SECURITY REQUIREMENTS.—A covered entity that is required to comply with title V of the Gramm-Leach-Bliley Act (15 U.S.C. 6801 et seq.), the Health Information Technology for Economic and Clinical Health Act (42 U.S.C. 17931 et seq.), part C of title XI of the Social Security Act (42 U.S.C. 1320d et seq.), or the regulations promulgated pursuant to section 264(c) of the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. 1320d–2 note), and is in compliance with the information security requirements of such regulations, part, title, or Act (as applicable), shall be deemed to be in compliance with the requirements of section 208, solely and exclusively with respect to data subject to the requirements of such regulations, part, title, or Act. Not later than 1 year after the date of enactment of this Act, the Commission shall issue guidance describing the implementation of this paragraph.

(b) Preemption of State laws.—

(1) IN GENERAL.—No State or political subdivision of a State may adopt, maintain, enforce, prescribe, or continue in effect any law, regulation, rule, standard, requirement, or other provision having the force and effect of law of any State, or political subdivision of a State, covered by the provisions of this Act, or a rule, regulation, or requirement promulgated under this Act.

(2) STATE LAW PRESERVATION.—Paragraph (1) may not be construed to preempt, displace, or supplant the following State laws, rules, regulations, or requirements:

(A) Consumer protection laws of general applicability, such as laws regulating deceptive, unfair, or unconscionable practices, except that the fact of a violation of this Act or a regulation promulgated under this Act may not be pleaded as an element of any violation of such a law.

(B) Civil rights laws.

(C) Provisions of laws, insofar as such provisions govern the privacy rights or other protections of employees, employee information, students, or student information.

(D) Laws that address notification requirements in the event of a data breach.

(E) Contract or tort law.

(F) Criminal laws.

(G) Civil laws governing fraud, theft (including identity theft), unauthorized access to information or electronic devices, unauthorized use of information, malicious behavior, or similar provisions of law.

(H) Civil laws regarding cyberstalking, cyberbullying, nonconsensual pornography, sexual harassment, child abuse material, child pornography, child abduction or attempted child abduction, coercion or enticement of a child for sexual activity, or child sex trafficking.

(I) Public safety or sector specific laws unrelated to privacy or security.

(J) Provisions of law, insofar as such provisions address public records, criminal justice information systems, arrest records, mug shots, conviction records, or non-conviction records.

(K) Provisions of law, insofar as such provisions address banking records, financial records, tax records, Social Security numbers, credit cards, consumer and credit reporting and investigations, credit repair, credit clinics, or check-cashing services.

(L) Provisions of law, insofar as such provisions address facial recognition or facial recognition technologies, electronic surveillance, wiretapping, or telephone monitoring.

(M) The Biometric Information Privacy Act (740 ILCS 14 et seq.) and the Genetic Information Privacy Act (410 ILCS 513 et seq.).

(N) Provisions of laws, insofar as such provisions address unsolicited email or text messages, telephone solicitation, or caller identification.

(O) Provisions of laws, insofar as such provisions address health information, medical information, medical records, HIV status, or HIV testing.

(P) Provisions of laws, insofar as such provisions pertain to public health activities, reporting, data, or services.

(Q) Provisions of law, insofar as such provisions address the confidentiality of library records.

(R) Section 1798.150 of the California Civil Code (as amended on November 3, 2020 by initiative Proposition 24, Section 16).

(S) Laws pertaining to the use of encryption as a means of providing data security.

(3) CPPA ENFORCEMENT.—Notwithstanding any other provision of law, the California Privacy Protection Agency established under section 1798.199.10(a) of the California Privacy Rights Act may enforce this Act in the same manner it would otherwise enforce the California Consumer Privacy Act, Section 1798.100 et seq.

(4) NONAPPLICATION OF FCC PRIVACY LAWS AND REGULATIONS TO CERTAIN COVERED ENTITIES.—Notwithstanding any other provision of law, sections 222, 338(i), and 631 of the Communications Act of 1934 (47 U.S.C. 222; 338(i); 551), and any regulations and orders promulgated by the Federal Communications Commission under any such section, do not apply to any covered entity with respect to the collection, processing, transfer, or security of covered data or its equivalent, and the related privacy and data security activities of a covered entity that would otherwise be regulated under such sections shall be governed exclusively by the provisions of this Act, except for—

(A) any emergency services, as defined in section 7 of the Wireless Communications and Public Safety Act of 1999 (47 U.S.C. 615b);

(B) subsections (b) and (g) of section 222 of the Communications Act of 1934 (47 U.S.C. 222); and

(C) any obligation of an international treaty related to the exchange of traffic implemented and enforced by the Federal Communications Commission.

(c) Preservation of common law or statutory causes of action for civil relief.—Nothing in this Act, nor any amendment, standard, rule, requirement, assessment, or regulation promulgated under this Act, may be construed to preempt, displace, or supplant any Federal or State common law rights or remedies, or any statute creating a remedy for civil relief, including any cause of action for personal injury, wrongful death, property damage, or other financial, physical, reputational, or psychological injury based in negligence, strict liability, products liability, failure to warn, an objectively offensive intrusion into the private affairs or concerns of the individual, or any other legal theory of liability under any Federal or State common law, or any State statutory law.

SEC. 405. Severability.

If any provision of this Act, or the application thereof to any person or circumstance, is held invalid, the remainder of this Act, and the application of such provision to other persons not similarly situated or to other circumstances, shall not be affected by the invalidation.

SEC. 406. COPPA.

(a) In general.—Nothing in this Act may be construed to relieve or change any obligation that a covered entity or other person may have under the Children’s Online Privacy Protection Act of 1998 (15 U.S.C. 6501 et seq.).

(b) Updated regulations.—Not later than 180 days after the date of enactment of this Act, the Commission shall amend the regulations promulgated by the Commission under the Children’s Online Privacy Protection Act of 1998 (15 U.S.C. 6501 et seq.) to make reference to the additional requirements placed on covered entities under this Act, in addition to the requirements under the Children’s Online Privacy Protection Act of 1998 that may already apply to certain covered entities.

SEC. 407. Authorization of appropriations.

There are authorized to be appropriated to the Commission such sums as may be necessary to carry out this Act.

SEC. 408. Effective date.

This Act shall take effect on the date that is 180 days after the date of enactment of this Act.

12Nov/24

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. October 30, 2023.

Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. JOSEPH R. BIDEN JR. THE WHITE HOUSE,
October 30, 2023.

   By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:

     Section 1.  Purpose.  Artificial intelligence (AI) holds extraordinary potential for both promise and peril.  Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.  At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.  Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.  This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.

     My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so.  The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.

     In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built.  I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change.  They are the reasons we will succeed again in this moment.  We are more than capable of harnessing AI for justice, security, and opportunity for all.

     Sec. 2.  Policy and Principles.  It is the policy of my Administration to advance and govern the development and use of AI in accordance with eight guiding principles and priorities.  When undertaking the actions set forth in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:

     (a)  Artificial Intelligence must be safe and secure.  Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.  It also requires addressing AI systems’ most pressing security risks — including with respect to biotechnology, cybersecurity, critical infrastructure, and other national security dangers — while navigating AI’s opacity and complexity.  Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended, are resilient against misuse or dangerous modifications, are ethically developed and operated in a secure manner, and are compliant with applicable Federal laws and policies.  Finally, my Administration will help develop effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not.  These actions will provide a vital foundation for an approach that addresses AI’s risks without unduly reducing its benefits. 

     (b)  Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.  This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property (IP) questions and other problems to protect inventors and creators.  Across the Federal Government, my Administration will support programs to provide Americans the skills they need for the age of AI and attract the world’s AI talent to our shores — not just to study, but to stay — so that the companies and technologies of the future are made in America.  The Federal Government will promote a fair, open, and competitive ecosystem and marketplace for AI and related technologies so that small developers and entrepreneurs can continue to drive innovation.  Doing so requires stopping unlawful collusion and addressing risks from dominant firms’ use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors, and it requires supporting a marketplace that harnesses the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs. 

     (c)  The responsible development and use of AI require a commitment to supporting American workers.  As AI creates new jobs and industries, all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from these opportunities.  My Administration will seek to adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates.  In the workplace itself, AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions.  The critical next steps in AI development should be built on the views of workers, labor unions, educators, and employers to support responsible uses of AI that improve workers’ lives, positively augment human work, and help all people safely enjoy the gains and opportunities from technological innovation.

     (d)  Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights.  My Administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice.  From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life.  Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.  My Administration will build on the important steps that have already been taken — such as issuing the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government) — in seeking to ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation.  It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government.  Only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all.

     (e)  The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.  Use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change.  The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.  Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights.  At the same time, my Administration will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability.

     (f)  Americans’ privacy and civil liberties must be protected as AI continues advancing.  Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires.  Artificial Intelligence’s capabilities in these areas can increase the risk that personal data could be exploited and exposed.  To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, is secure, and mitigates privacy and confidentiality risks.  Agencies shall use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks — including the chilling of First Amendment rights — that result from the improper collection and use of people’s data.

     (g)  It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.  These efforts start with people, our Nation’s greatest asset.  My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines — including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields — and ease AI professionals’ path into the Federal Government to help harness and govern AI.  The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used. 

     (h)  The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.  This leadership is not measured solely by the technological advancements our country makes.  Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly — and building and promoting those safeguards with the rest of the world.  My Administration will engage with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.  The Federal Government will seek to promote responsible AI safety and security principles and actions with other nations, including our competitors, while leading key global conversations and collaborations to ensure that AI benefits the whole world, rather than exacerbating inequities, threatening human rights, and causing other harms. 

     Sec. 3.  Definitions.  For purposes of this order:

     (a)  The term “agency” means each agency described in 44 U.S.C. 3502(1), except for the independent regulatory agencies described in 44 U.S.C. 3502(5).

     (b)  The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3):  a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

     (c)  The term “AI model” means a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.

     (d)  The term “AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.  Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.

     (e)  The term “AI system” means any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.

     (f)  The term “commercially available information” means any information or data about an individual or group of individuals, including an individual’s or group of individuals’ device or location, that is made available or obtainable and sold, leased, or licensed to the general public or to governmental or non-governmental entities. 

     (g)  The term “crime forecasting” means the use of analytical techniques to attempt to predict future crimes or crime-related information.  It can include machine-generated predictions that use algorithms to analyze large volumes of data, as well as other forecasts that are generated without machines and based on statistics, such as historical crime statistics.

     (h)  The term “critical and emerging technologies” means those technologies listed in the February 2022 Critical and Emerging Technologies List Update issued by the National Science and Technology Council (NSTC), as amended by subsequent updates to the list issued by the NSTC. 

     (i)  The term “critical infrastructure” has the meaning set forth in section 1016(e) of the USA PATRIOT Act of 2001, 42 U.S.C. 5195c(e).

     (j)  The term “differential-privacy guarantee” means protections that allow information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular entities.  
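
As a point of reference (the formula is the standard formalization from the research literature, not text taken from the order), an ε-differential-privacy guarantee for a randomized mechanism M requires, for any two datasets D and D′ differing in one individual’s record:

    \Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,] \quad \text{for every set of outputs } S

Smaller values of ε correspond to tighter limits on how much any single individual’s data can change what is disclosed.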

     (k)  The term “dual-use foundation model” means an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

          (i)    substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons;

          (ii)   enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

          (iii)  permitting the evasion of human control or oversight through means of deception or obfuscation.

Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities. 

     (l)  The term “Federal law enforcement agency” has the meaning set forth in section 21(a) of Executive Order 14074 of May 25, 2022 (Advancing Effective, Accountable Policing and Criminal Justice Practices To Enhance Public Trust and Public Safety).

     (m)  The term “floating-point operation” means any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.
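
Purely as an illustration of that representation (a sign, a fixed-precision integer significand, and an integer exponent of a fixed base), the following Python sketch decodes the IEEE 754 fields of a double-precision number; it is not part of the order:

    import struct

    def ieee754_fields(x: float):
        # Reinterpret the 64-bit double as an unsigned integer and split its fields.
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]
        sign = bits >> 63                      # 1 sign bit
        exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
        significand = bits & ((1 << 52) - 1)   # 52-bit fraction field (fixed-precision integer)
        return sign, exponent, significand

    print(ieee754_fields(1.5))  # (0, 1023, 2251799813685248)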

     (n)  The term “foreign person” has the meaning set forth in section 5(c) of Executive Order 13984 of January 19, 2021 (Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities).

     (o)  The terms “foreign reseller” and “foreign reseller of United States Infrastructure as a Service Products” mean a foreign person who has established an Infrastructure as a Service Account to provide Infrastructure as a Service Products subsequently, in whole or in part, to a third party.

     (p)  The term “generative AI” means the class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content.  This can include images, videos, audio, text, and other digital content.

     (q)  The terms “Infrastructure as a Service Product,” “United States Infrastructure as a Service Product,” “United States Infrastructure as a Service Provider,” and “Infrastructure as a Service Account” each have the respective meanings given to those terms in section 5 of Executive Order 13984.

     (r)  The term “integer operation” means any mathematical operation or assignment involving only integers, or whole numbers expressed without a decimal point.

     (s)  The term “Intelligence Community” has the meaning given to that term in section 3.5(h) of Executive Order 12333 of December 4, 1981 (United States Intelligence Activities), as amended. 

     (t)  The term “machine learning” means a set of techniques that can be used to train AI algorithms to improve performance at a task based on data.

     (u)  The term “model weight” means a numerical parameter within an AI model that helps determine the model’s outputs in response to inputs.

     (v)  The term “national security system” has the meaning set forth in 44 U.S.C. 3552(b)(6).

     (w)  The term “omics” means biomolecules, including nucleic acids, proteins, and metabolites, that make up a cell or cellular system.

     (x)  The term “Open RAN” means the Open Radio Access Network approach to telecommunications-network standardization adopted by the O-RAN Alliance, Third Generation Partnership Project, or any similar set of published open standards for multi-vendor network equipment interoperability.

     (y)  The term “personally identifiable information” has the meaning set forth in Office of Management and Budget (OMB) Circular No. A-130.

     (z)  The term “privacy-enhancing technology” means any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality.  These technological means may include secure multiparty computation, homomorphic encryption, zero-knowledge proofs, federated learning, secure enclaves, differential privacy, and synthetic-data-generation tools.  This is also sometimes referred to as “privacy-preserving technology.”
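
As one concrete, deliberately simplified example of such a technique, the Laplace mechanism from differential privacy adds calibrated noise to an aggregate statistic before release. The sketch below is illustrative only and is not a method prescribed by the order:

    import math
    import random

    def laplace_count_release(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
        # Noise drawn from Laplace(0, sensitivity/epsilon) makes the released count
        # epsilon-differentially private for counting queries.
        scale = sensitivity / epsilon
        u = random.random() - 0.5                                   # uniform on (-0.5, 0.5)
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        return true_count + noise

    print(laplace_count_release(1200, epsilon=0.5))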

     (aa)  The term “privacy impact assessment” has the meaning set forth in OMB Circular No. A-130.

     (bb)  The term “Sector Risk Management Agency” has the meaning set forth in 6 U.S.C. 650(23).

     (cc)  The term “self-healing network” means a telecommunications network that automatically diagnoses and addresses network issues to permit self-restoration.

     (dd)  The term “synthetic biology” means a field of science that involves redesigning organisms, or the biomolecules of organisms, at the genetic level to give them new characteristics.  Synthetic nucleic acids are a type of biomolecule redesigned through synthetic-biology methods.

     (ee)  The term “synthetic content” means information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.

     (ff)  The term “testbed” means a facility or mechanism equipped for conducting rigorous, transparent, and replicable testing of tools and technologies, including AI and PETs, to help evaluate the functionality, usability, and performance of those tools or technologies.

     (gg)  The term “watermarking” means the act of embedding information, which is typically difficult to remove, into outputs created by AI — including into outputs such as photos, videos, audio clips, or text — for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.
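
     Definition (z) above lists differential privacy among the privacy-enhancing technologies covered by this order.  The following is a minimal sketch of one such technique, the Laplace mechanism, offered purely as an illustration of how a single statistic can be released with calibrated noise; the function name, parameters, and sample data are editorial assumptions and do not appear in the order.

import numpy as np

# Minimal sketch of the Laplace mechanism, one of the privacy-enhancing
# technologies named in definition (z).  Function name, parameters, and the
# sample data are illustrative assumptions, not drawn from the order.
def noisy_count(values, threshold, epsilon=1.0):
    """Release a differentially private count of values above `threshold`.

    A counting query has sensitivity 1 (adding or removing one record changes
    the true count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single release.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(noisy_count([3, 7, 9, 12], threshold=5, epsilon=0.5))
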
     Sec. 4.  Ensuring the Safety and Security of AI Technology.

     4.1.  Developing Guidelines, Standards, and Best Practices for AI Safety and Security.  (a)  Within 270 days of the date of this order, to help ensure the development of safe, secure, and trustworthy AI systems, the Secretary of Commerce, acting through the Director of the National Institute of Standards and Technology (NIST), in coordination with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall:

          (i)   Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including:

               (A)  developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI;

               (B)  developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and

               (C)  launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity.

          (ii)  Establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.  These efforts shall include:

               (A)  coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and

               (B)  in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order. 

     (b)  Within 270 days of the date of this order, to understand and mitigate AI security risks, the Secretary of Energy, in coordination with the heads of other Sector Risk Management Agencies (SRMAs) as the Secretary of Energy may deem appropriate, shall develop and, to the extent permitted by law and available appropriations, implement a plan for developing the Department of Energy’s AI model evaluation tools and AI testbeds.  The Secretary shall undertake this work using existing solutions where possible, and shall develop these tools and AI testbeds to be capable of assessing near-term extrapolations of AI systems’ capabilities.  At a minimum, the Secretary shall develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards.  The Secretary shall do this work solely for the purposes of guarding against these threats, and shall also develop model guardrails that reduce such risks.  The Secretary shall, as appropriate, consult with private AI laboratories, academia, civil society, and third-party evaluators, and shall use existing solutions.

     4.2.  Ensuring Safe and Reliable AI.  (a)  Within 90 days of the date of this order, to ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act, as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:

          (i)   Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:

               (A)  any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;

               (B)  the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and

               (C)  the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security.  Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives; and

          (ii)  Companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.

     (b)  The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section.  Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:

          (i)   any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and

          (ii)  any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI.
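
     The interim thresholds in subsections 4.2(b)(i) and (ii) reduce to simple arithmetic on operation counts.  The sketch below illustrates the comparison; the accelerator count, per-chip peak throughput, run length, and the assumption of full utilization are hypothetical figures chosen only for illustration and are not taken from the order.

# Illustrative arithmetic for the interim reporting thresholds in 4.2(b).
# The accelerator count, per-chip peak throughput, run length, and the
# assumption of full utilization are hypothetical, not figures from the order.
MODEL_THRESHOLD_OPS = 1e26           # total training operations, 4.2(b)(i)
CLUSTER_THRESHOLD_OPS_PER_S = 1e20   # theoretical maximum capacity, 4.2(b)(ii)

chips = 20_000                       # hypothetical accelerator count
peak_ops_per_chip_per_s = 1e15       # hypothetical peak ops/s per accelerator
training_days = 100                  # hypothetical duration of one training run

cluster_peak = chips * peak_ops_per_chip_per_s            # 2e19 ops/s
run_total_ops = cluster_peak * training_days * 24 * 3600  # ~1.7e26 ops

print(cluster_peak >= CLUSTER_THRESHOLD_OPS_PER_S)   # False: below the cluster threshold
print(run_total_ops >= MODEL_THRESHOLD_OPS)          # True: such a run would be reportable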

     (c)  Because I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities declared in Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), as amended by Executive Order 13757 of December 28, 2016 (Taking Additional Steps to Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities), and further amended by Executive Order 13984, to address the use of United States Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors, including to impose additional record-keeping obligations with respect to foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors, I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to:

          (i)    Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”).  Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria set forth in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary.

          (ii)   Include a requirement in the regulations proposed pursuant to subsection 4.2(c)(i) of this section that United States IaaS Providers prohibit any foreign reseller of their United States IaaS Product from providing those products unless such foreign reseller submits to the United States IaaS Provider a report, which the United States IaaS Provider must provide to the Secretary of Commerce, detailing each instance in which a foreign person transacts with the foreign reseller to use the United States IaaS Product to conduct a training run described in subsection 4.2(c)(i) of this section.  Such reports shall include, at a minimum, the information specified in subsection 4.2(c)(i) of this section as well as any additional information identified by the Secretary.

          (iii)  Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate.  Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI.

     (d)  Within 180 days of the date of this order, pursuant to the finding set forth in subsection 4.2(c) of this section, the Secretary of Commerce shall propose regulations that require United States IaaS Providers to ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller.  These regulations shall, at a minimum:

          (i)    Set forth the minimum standards that a United States IaaS Provider must require of foreign resellers of its United States IaaS Products to verify the identity of a foreign person who opens an account or maintains an existing account with a foreign reseller, including:

               (A)  the types of documentation and procedures that foreign resellers of United States IaaS Products must require to verify the identity of any foreign person acting as a lessee or sub-lessee of these products or services;

               (B)  records that foreign resellers of United States IaaS Products must securely maintain regarding a foreign person that obtains an account, including information establishing:

                    (1)  the identity of such foreign person, including name and address;

                    (2)  the means and source of payment (including any associated financial institution and other identifiers such as credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier);

                    (3)  the electronic mail address and telephonic contact information used to verify a foreign person’s identity; and

                    (4)  the Internet Protocol addresses used for access or administration and the date and time of each such access or administrative action related to ongoing verification of such foreign person’s ownership of such an account; and

               (C)  methods that foreign resellers of United States IaaS Products must implement to limit all third-party access to the information described in this subsection, except insofar as such access is otherwise consistent with this order and allowed under applicable law;

          (ii)   Take into consideration the types of accounts maintained by foreign resellers of United States IaaS Products, methods of opening an account, and types of identifying information available to accomplish the objectives of identifying foreign malicious cyber actors using any such products and avoiding the imposition of an undue burden on such resellers; and

          (iii)  Provide that the Secretary of Commerce, in accordance with such standards and procedures as the Secretary may delineate and in consultation with the Secretary of Defense, the Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence, may exempt a United States IaaS Provider with respect to any specific foreign reseller of their United States IaaS Products, or with respect to any specific type of account or lessee, from the requirements of any regulation issued pursuant to this subsection.  Such standards and procedures may include a finding by the Secretary that such foreign reseller, account, or lessee complies with security best practices to otherwise deter abuse of United States IaaS Products.
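
     Subsection 4.2(d)(i)(B) above enumerates the records a foreign reseller would maintain for each account holder.  Purely as a data-structure sketch, one way to model such a record is shown below; all field names and types are editorial assumptions, not language from the order or from any regulation it directs.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Illustrative model of the record categories enumerated in 4.2(d)(i)(B).
# Field names and types are editorial assumptions, not regulatory text.
@dataclass
class AccessEvent:
    ip_address: str       # (4) IP address used for access or administration
    timestamp: datetime   # (4) date and time of that access
    action: str           # e.g. "access" or "administration"

@dataclass
class ForeignAccountRecord:
    name: str                        # (1) identity: name
    address: str                     # (1) identity: address
    payment_means: str               # (2) means and source of payment
    financial_institution: str       # (2) associated financial institution
    payment_identifier: str          # (2) card, account, or wallet identifier
    email: str                       # (3) email used to verify identity
    phone: str                       # (3) telephonic contact information
    access_events: List[AccessEvent] = field(default_factory=list)

record = ForeignAccountRecord(
    name="Example Person", address="1 Example Way",
    payment_means="credit card", financial_institution="Example Bank",
    payment_identifier="****1234", email="person@example.org", phone="+1-555-0100",
)
record.access_events.append(AccessEvent("192.0.2.1", datetime(2024, 1, 1, 12, 0), "access"))
print(record.name, len(record.access_events))
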

     (e)  The Secretary of Commerce is hereby authorized to take such actions, including the promulgation of rules and regulations, and to employ all powers granted to the President by the International Emergency Economic Powers Act, 50 U.S.C. 1701 et seq., as may be necessary to carry out the purposes of subsections 4.2(c) and (d) of this section.  Such actions may include a requirement that United States IaaS Providers require foreign resellers of United States IaaS Products to provide United States IaaS Providers verifications relative to those subsections.

     4.3.  Managing AI in Critical Infrastructure and in Cybersecurity.  (a)  To ensure the protection of critical infrastructure, the following actions shall be taken:

          (i)    Within 90 days of the date of this order, and at least annually thereafter, the head of each agency with relevant regulatory authority over critical infrastructure and the heads of relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities.  Independent regulatory agencies are encouraged, as they deem appropriate, to contribute to sector-specific risk assessments.

          (ii)   Within 150 days of the date of this order, the Secretary of the Treasury shall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.

          (iii)  Within 180 days of the date of this order, the Secretary of Homeland Security, in coordination with the Secretary of Commerce and with SRMAs and other regulators as determined by the Secretary of Homeland Security, shall incorporate as appropriate the AI Risk Management Framework, NIST AI 100-1, as well as other appropriate security guidance, into relevant safety and security guidelines for use by critical infrastructure owners and operators.

          (iv)   Within 240 days of the completion of the guidelines described in subsection 4.3(a)(iii) of this section, the Assistant to the President for National Security Affairs and the Director of OMB, in consultation with the Secretary of Homeland Security, shall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines, or appropriate portions thereof, through regulatory or other appropriate action.  Independent regulatory agencies are encouraged, as they deem appropriate, to consider whether to mandate guidance through regulatory action in their areas of authority and responsibility.

          (v)    The Secretary of Homeland Security shall establish an Artificial Intelligence Safety and Security Board as an advisory committee pursuant to section 871 of the Homeland Security Act of 2002 (Public Law 107-296).  The Advisory Committee shall include AI experts from the private sector, academia, and government, as appropriate, and provide to the Secretary of Homeland Security and the Federal Government’s critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure.

     (b)  To capitalize on AI’s potential to improve United States cyber defenses:

          (i)    The Secretary of Defense shall carry out the actions described in subsections 4.3(b)(ii) and (iii) of this section for national security systems, and the Secretary of Homeland Security shall carry out these actions for non-national security systems.  Each shall do so in consultation with the heads of other relevant agencies as the Secretary of Defense and the Secretary of Homeland Security may deem appropriate. 

          (ii)   As set forth in subsection 4.3(b)(i) of this section, within 180 days of the date of this order, the Secretary of Defense and the Secretary of Homeland Security shall, consistent with applicable law, each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks.

          (iii)  As set forth in subsection 4.3(b)(i) of this section, within 270 days of the date of this order, the Secretary of Defense and the Secretary of Homeland Security shall each provide a report to the Assistant to the President for National Security Affairs on the results of actions taken pursuant to the plans and operational pilot projects required by subsection 4.3(b)(ii) of this section, including a description of any vulnerabilities found and fixed through the development and deployment of AI capabilities and any lessons learned on how to identify, develop, test, evaluate, and deploy AI capabilities effectively for cyber defense.

     4.4.  Reducing Risks at the Intersection of AI and CBRN Threats.  (a)  To better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats — with a particular focus on biological weapons — the following actions shall be taken: 

          (i)   Within 180 days of the date of this order, the Secretary of Homeland Security, in consultation with the Secretary of Energy and the Director of the Office of Science and Technology Policy (OSTP), shall evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats, including, as appropriate, the results of work conducted under section 8(b) of this order.  The Secretary of Homeland Security shall:

               (A)  consult with experts in AI and CBRN issues from the Department of Energy, private AI laboratories, academia, and third-party model evaluators, as appropriate, to evaluate AI model capabilities to present CBRN threats — for the sole purpose of guarding against those threats — as well as options for minimizing the risks of AI model misuse to generate or exacerbate those threats; and

               (B)  submit a report to the President that describes the progress of these efforts, including an assessment of the types of AI models that may present CBRN risks to the United States, and that makes recommendations for regulating or overseeing the training, deployment, publication, or use of these models, including requirements for safety evaluations and guardrails for mitigating potential threats to national security.

          (ii)  Within 120 days of the date of this order, the Secretary of Defense, in consultation with the Assistant to the President for National Security Affairs and the Director of OSTP, shall enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct — and submit to the Secretary of Defense, the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, the Director of OSTP, and the Chair of the Chief Data Officer Council — a study that:

               (A)  assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;

               (B)  considers the national security implications of the use of data and datasets, especially those associated with pathogens and omics studies, that the United States Government hosts, generates, funds the creation of, or otherwise owns, for the training of generative AI models, and makes recommendations on how to mitigate the risks related to the use of these data and datasets;

               (C)  assesses the ways in which AI applied to biology can be used to reduce biosecurity risks, including recommendations on opportunities to coordinate data and high-performance computing resources; and

               (D)  considers additional concerns and opportunities at the intersection of AI and synthetic biology that the Secretary of Defense deems appropriate.

     (b)  To reduce the risk of misuse of synthetic nucleic acids, which could be substantially increased by AI’s capabilities in this area, and improve biosecurity measures for the nucleic acid synthesis industry, the following actions shall be taken:

          (i)    Within 180 days of the date of this order, the Director of OSTP, in consultation with the Secretary of State, the Secretary of Defense, the Attorney General, the Secretary of Commerce, the Secretary of Health and Human Services (HHS), the Secretary of Energy, the Secretary of Homeland Security, the Director of National Intelligence, and the heads of other relevant agencies as the Director of OSTP may deem appropriate, shall establish a framework, incorporating, as appropriate, existing United States Government guidance, to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms, including standards and recommended incentives.  As part of this framework, the Director of OSTP shall:

               (A)  establish criteria and mechanisms for ongoing identification of biological sequences that could be used in a manner that would pose a risk to the national security of the United States; and

               (B)  determine standardized methodologies and tools for conducting and verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in subsection 4.4(b)(i)(A) of this section, and processes for the reporting of concerning activity to enforcement entities.

          (ii)   Within 180 days of the date of this order, the Secretary of Commerce, acting through the Director of NIST, in coordination with the Director of OSTP, and in consultation with the Secretary of State, the Secretary of HHS, and the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall initiate an effort to engage with industry and relevant stakeholders, informed by the framework developed under subsection 4.4(b)(i) of this section, to develop and refine for possible use by synthetic nucleic acid sequence providers:

               (A)  specifications for effective nucleic acid synthesis procurement screening;

               (B)  best practices, including security and access controls, for managing sequence-of-concern databases to support such screening;

               (C)  technical implementation guides for effective screening; and

               (D)  conformity-assessment best practices and mechanisms.

          (iii)  Within 180 days of the establishment of the framework pursuant to subsection 4.4(b)(i) of this section, all agencies that fund life-sciences research shall, as appropriate and consistent with applicable law, establish that, as a requirement of funding, synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework, such as through an attestation from the provider or manufacturer.  The Assistant to the President for National Security Affairs and the Director of OSTP shall coordinate the process of reviewing such funding requirements to facilitate consistency in implementation of the framework across funding agencies.

          (iv)   In order to facilitate effective implementation of the measures described in subsections 4.4(b)(i)-(iii) of this section, the Secretary of Homeland Security, in consultation with the heads of other relevant agencies as the Secretary of Homeland Security may deem appropriate, shall:

               (A)  within 180 days of the establishment of the framework pursuant to subsection 4.4(b)(i) of this section, develop a framework to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening, including the systems developed in accordance with subsections 4.4(b)(i)-(ii) of this section and implemented by providers of synthetic nucleic acid sequences; and

               (B)  following development of the framework pursuant to subsection 4.4(b)(iv)(A) of this section, submit an annual report to the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, and the Director of OSTP on any results of the activities conducted pursuant to subsection 4.4(b)(iv)(A) of this section, including recommendations, if any, on how to strengthen nucleic acid synthesis procurement screening, including customer screening systems.

     4.5.  Reducing the Risks Posed by Synthetic Content.  To foster capabilities for identifying and labeling synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf:

     (a)  Within 240 days of the date of this order, the Secretary of Commerce, in consultation with the heads of other relevant agencies as the Secretary of Commerce may deem appropriate, shall submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:

          (i)    authenticating content and tracking its provenance;

          (ii)   labeling synthetic content, such as using watermarking;

          (iii)  detecting synthetic content;

          (iv)   preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual);

          (v)    testing software used for the above purposes; and

          (vi)   auditing and maintaining synthetic content.

     (b)  Within 180 days of submitting the report required under subsection 4.5(a) of this section, and updated periodically thereafter, the Secretary of Commerce, in coordination with the Director of OMB, shall develop guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures.  The guidance shall include measures for the purposes listed in subsection 4.5(a) of this section.

     (c)  Within 180 days of the development of the guidance required under subsection 4.5(b) of this section, and updated periodically thereafter, the Director of OMB, in consultation with the Secretary of State; the Secretary of Defense; the Attorney General; the Secretary of Commerce, acting through the Director of NIST; the Secretary of Homeland Security; the Director of National Intelligence; and the heads of other agencies that the Director of OMB deems appropriate, shall — for the purpose of strengthening public confidence in the integrity of official United States Government digital content — issue guidance to agencies for labeling and authenticating such content that they produce or publish.

     (d)  The Federal Acquisition Regulatory Council shall, as appropriate and consistent with applicable law, consider amending the Federal Acquisition Regulation to take into account the guidance established under subsection 4.5 of this section.

     4.6.  Soliciting Input on Dual-Use Foundation Models with Widely Available Model Weights.  When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model.  To address the risks and potential benefits of dual-use foundation models with widely available weights, within 270 days of the date of this order, the Secretary of Commerce, acting through the Assistant Secretary of Commerce for Communications and Information, and in consultation with the Secretary of State, shall:

     (a)  solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:

          (i)    risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models’ safeguards;

          (ii)   benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and

          (iii)  potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available; and

     (b)  based on input from the process described in subsection 4.6(a) of this section, and in consultation with the heads of other relevant agencies as the Secretary of Commerce deems appropriate, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models.

     4.7.  Promoting Safe Release and Preventing the Malicious Use of Federal Data for AI Training.  To improve public data access and manage security risks, and consistent with the objectives of the Open, Public, Electronic, and Necessary Government Data Act (title II of Public Law 115-435) to expand public access to Federal data assets in a machine-readable format while also taking into account security considerations, including the risk that information in an individual data asset in isolation does not pose a security risk but, when combined with other available information, may pose such a risk:

     (a)  within 270 days of the date of this order, the Chief Data Officer Council, in consultation with the Secretary of Defense, the Secretary of Commerce, the Secretary of Energy, the Secretary of Homeland Security, and the Director of National Intelligence, shall develop initial guidelines for performing security reviews, including reviews to identify and manage the potential security risks of releasing Federal data that could aid in the development of CBRN weapons as well as the development of autonomous offensive cyber capabilities, while also providing public access to Federal Government data in line with the goals stated in the Open, Public, Electronic, and Necessary Government Data Act (title II of Public Law 115-435); and

     (b)  within 180 days of the development of the initial guidelines required by subsection 4.7(a) of this section, agencies shall conduct a security review of all data assets in the comprehensive data inventory required under 44 U.S.C. 3511(a)(1) and (2)(B) and shall take steps, as appropriate and consistent with applicable law, to address the highest-priority potential security risks that releasing that data could raise with respect to CBRN weapons, such as the ways in which that data could be used to train AI systems.

     4.8.  Directing the Development of a National Security Memorandum.  To develop a coordinated executive branch approach to managing AI’s security risks, the Assistant to the President for National Security Affairs and the Assistant to the President and Deputy Chief of Staff for Policy shall oversee an interagency process with the purpose of, within 270 days of the date of this order, developing and submitting a proposed National Security Memorandum on AI to the President.  The memorandum shall address the governance of AI used as a component of a national security system or for military and intelligence purposes.  The memorandum shall take into account current efforts to govern the development and use of AI for national security systems.  The memorandum shall outline actions for the Department of Defense, the Department of State, other relevant agencies, and the Intelligence Community to address the national security risks and potential benefits posed by AI.  In particular, the memorandum shall:

     (a)  provide guidance to the Department of Defense, other relevant agencies, and the Intelligence Community on the continued adoption of AI capabilities to advance the United States national security mission, including through directing specific AI assurance and risk-management practices for national security uses of AI that may affect the rights or safety of United States persons and, in appropriate contexts, non-United States persons; and

     (b)  direct continued actions, as appropriate and consistent with applicable law, to address the potential use of AI systems by adversaries and other foreign actors in ways that threaten the capabilities or objectives of the Department of Defense or the Intelligence Community, or that otherwise pose risks to the security of the United States or its allies and partners.  

     Sec. 5.  Promoting Innovation and Competition.

     5.1.  Attracting AI Talent to the United States.  (a)  Within 90 days of the date of this order, to attract and retain talent in AI and other critical and emerging technologies in the United States economy, the Secretary of State and the Secretary of Homeland Security shall take appropriate steps to:

          (i)   streamline processing times of visa petitions and applications, including by ensuring timely availability of visa appointments, for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI or other critical and emerging technologies; and 

          (ii)  facilitate continued availability of visa appointments in sufficient volume for applicants with expertise in AI or other critical and emerging technologies.

     (b)  Within 120 days of the date of this order, the Secretary of State shall:

          (i)    consider initiating a rulemaking to establish new criteria to designate countries and skills on the Department of State’s Exchange Visitor Skills List as it relates to the 2-year foreign residence requirement for certain J-1 nonimmigrants, including those skills that are critical to the United States;

          (ii)   consider publishing updates to the 2009 Revised Exchange Visitor Skills List (74 FR 20108); and

          (iii)  consider implementing a domestic visa renewal program under 22 C.F.R. 41.111(b) to facilitate the ability of qualified applicants, including highly skilled talent in AI and critical and emerging technologies, to continue their work in the United States without unnecessary interruption.

     (c)  Within 180 days of the date of this order, the Secretary of State shall:

          (i)   consider initiating a rulemaking to expand the categories of nonimmigrants who qualify for the domestic visa renewal program covered under 22 C.F.R. 41.111(b) to include academic J-1 research scholars and F-1 students in science, technology, engineering, and mathematics (STEM); and

          (ii)  establish, to the extent permitted by law and available appropriations, a program to identify and attract top talent in AI and other critical and emerging technologies at universities, research institutions, and the private sector overseas, and to establish and increase connections with that talent to educate them on opportunities and resources for research and employment in the United States, including overseas educational components to inform top STEM talent of nonimmigrant and immigrant visa options and potential expedited adjudication of their visa petitions and applications.

     (d)  Within 180 days of the date of this order, the Secretary of Homeland Security shall:

          (i)   review and initiate any policy changes the Secretary determines necessary and appropriate to clarify and modernize immigration pathways for experts in AI and other critical and emerging technologies, including O-1A and EB-1 noncitizens of extraordinary ability; EB-2 advanced-degree holders and noncitizens of exceptional ability; and startup founders in AI and other critical and emerging technologies using the International Entrepreneur Rule; and

          (ii)  continue its rulemaking process to modernize the H-1B program and enhance its integrity and usage, including by experts in AI and other critical and emerging technologies, and consider initiating a rulemaking to enhance the process for noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident.

     (e)  Within 45 days of the date of this order, for purposes of considering updates to the “Schedule A” list of occupations, 20 C.F.R. 656.5, the Secretary of Labor shall publish a request for information (RFI) to solicit public input, including from industry and worker-advocate communities, identifying AI and other STEM-related occupations, as well as additional occupations across the economy, for which there is an insufficient number of ready, willing, able, and qualified United States workers.

     (f)  The Secretary of State and the Secretary of Homeland Security shall, consistent with applicable law and implementing regulations, use their discretionary authorities to support and attract foreign nationals with special skills in AI and other critical and emerging technologies seeking to work, study, or conduct research in the United States.

     (g)  Within 120 days of the date of this order, the Secretary of Homeland Security, in consultation with the Secretary of State, the Secretary of Commerce, and the Director of OSTP, shall develop and publish informational resources to better attract and retain experts in AI and other critical and emerging technologies, including:

          (i)   a clear and comprehensive guide for experts in AI and other critical and emerging technologies to understand their options for working in the United States, to be published in multiple relevant languages on AI.gov; and

          (ii)  a public report with relevant data on applications, petitions, approvals, and other key indicators of how experts in AI and other critical and emerging technologies have utilized the immigration system through the end of Fiscal Year 2023.

     5.2.  Promoting Innovation.  (a)  To develop and strengthen public-private partnerships for advancing innovation, commercialization, and risk-mitigation methods for AI, and to help promote safe, responsible, fair, privacy-protecting, and trustworthy AI systems, the Director of NSF shall take the following steps:

          (i)    Within 90 days of the date of this order, in coordination with the heads of agencies that the Director of NSF deems appropriate, launch a pilot program implementing the National AI Research Resource (NAIRR), consistent with past recommendations of the NAIRR Task Force.  The program shall pursue the infrastructure, governance mechanisms, and user interfaces to pilot an initial integration of distributed computational, data, model, and training resources to be made available to the research community in support of AI-related research and development.  The Director of NSF shall identify Federal and private sector computational, data, software, and training resources appropriate for inclusion in the NAIRR pilot program.  To assist with such work, within 45 days of the date of this order, the heads of agencies whom the Director of NSF identifies for coordination pursuant to this subsection shall each submit to the Director of NSF a report identifying the agency resources that could be developed and integrated into such a pilot program.  These reports shall include a description of such resources, including their current status and availability; their format, structure, or technical specifications; associated agency expertise that will be provided; and the benefits and risks associated with their inclusion in the NAIRR pilot program.  The heads of independent regulatory agencies are encouraged to take similar steps, as they deem appropriate.

          (ii)   Within 150 days of the date of this order, fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research, societal, or workforce needs.

          (iii)  Within 540 days of the date of this order, establish at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order. 

     (b)  Within 120 days of the date of this order, to support activities involving high-performance and data-intensive computing, the Secretary of Energy, in coordination with the Director of NSF, shall, in a manner consistent with applicable law and available appropriations, establish a pilot program to enhance existing successful training programs for scientists, with the goal of training 500 new researchers by 2025 capable of meeting the rising demand for AI talent.

     (c)  To promote innovation and clarify issues related to AI and inventorship of patentable subject matter, the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office (USPTO Director) shall:

          (i)    within 120 days of the date of this order, publish guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed;

          (ii)   subsequently, within 270 days of the date of this order, issue additional guidance to USPTO patent examiners and applicants to address other considerations at the intersection of AI and IP, which could include, as the USPTO Director deems necessary, updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies; and

          (iii)  within 270 days of the date of this order or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, whichever comes later, consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI.  The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.

     (d)  Within 180 days of the date of this order, to assist developers of AI in combatting AI-related IP risks, the Secretary of Homeland Security, acting through the Director of the National Intellectual Property Rights Coordination Center, and in consultation with the Attorney General, shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks.  Such a program shall:

          (i)    include appropriate personnel dedicated to collecting and analyzing reports of AI-related IP theft, investigating such incidents with implications for national security, and, where appropriate and consistent with applicable law, pursuing related enforcement actions;

          (ii)   implement a policy of sharing information and coordinating on such work, as appropriate and consistent with applicable law, with the Federal Bureau of Investigation; United States Customs and Border Protection; other agencies; State and local agencies; and appropriate international organizations, including through work-sharing agreements;

          (iii)  develop guidance and other appropriate resources to assist private sector actors with mitigating the risks of AI-related IP theft;

          (iv)   share information and best practices with AI developers and law enforcement personnel to identify incidents, inform stakeholders of current legal requirements, and evaluate AI systems for IP law violations, as well as develop mitigation strategies and resources; and

          (v)    assist the Intellectual Property Enforcement Coordinator in updating the Intellectual Property Enforcement Coordinator Joint Strategic Plan on Intellectual Property Enforcement to address AI-related issues.

     (e)  To advance responsible AI innovation by a wide range of healthcare technology developers that promotes the welfare of patients and workers in the healthcare sector, the Secretary of HHS shall identify and, as appropriate and consistent with applicable law and the activities directed in section 8 of this order, prioritize grantmaking and other awards, as well as undertake related efforts, to support responsible AI development and use, including:

          (i)    collaborating with appropriate private sector actors through HHS programs that may support the advancement of AI-enabled tools that develop personalized immune-response profiles for patients, consistent with section 4 of this order;

          (ii)   prioritizing the allocation of 2024 Leading Edge Acceleration Project cooperative agreement awards to initiatives that explore ways to improve healthcare-data quality to support the responsible development of AI tools for clinical care, real-world-evidence programs, population health, public health, and related research; and

          (iii)  accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program and showcasing current AIM-AHEAD activities in underserved communities.

     (f)  To advance the development of AI systems that improve the quality of veterans’ healthcare, and in order to support small businesses’ innovative capacity, the Secretary of Veterans Affairs shall:

          (i)   within 365 days of the date of this order, host two 3-month nationwide AI Tech Sprint competitions; and

          (ii)  as part of the AI Tech Sprint competitions and in collaboration with appropriate partners, provide participants access to technical assistance, mentorship opportunities, individualized expert feedback on products under development, potential contract opportunities, and other programming and resources.

     (g)  Within 180 days of the date of this order, to support the goal of strengthening our Nation’s resilience against climate change impacts and building an equitable clean energy economy for the future, the Secretary of Energy, in consultation with the Chair of the Federal Energy Regulatory Commission, the Director of OSTP, the Chair of the Council on Environmental Quality, the Assistant to the President and National Climate Advisor, and the heads of other relevant agencies as the Secretary of Energy may deem appropriate, shall:

          (i)    issue a public report describing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans;

          (ii)   develop tools that facilitate building foundation models useful for basic and applied science, including models that streamline permitting and environmental reviews while improving environmental and social outcomes;

          (iii)  collaborate, as appropriate, with private sector organizations and members of academia to support development of AI tools to mitigate climate change risks;

          (iv)   take steps to expand partnerships with industry, academia, other agencies, and international allies and partners to utilize the Department of Energy’s computing capabilities and AI testbeds to build foundation models that support new applications in science and energy, and for national security, including partnerships that increase community preparedness for climate-related risks, enable clean-energy deployment (including addressing delays in permitting reviews), and enhance grid reliability and resilience; and

          (v)    establish an office to coordinate development of AI and other critical and emerging technologies across Department of Energy programs and the 17 National Laboratories.

     (h)  Within 180 days of the date of this order, to understand AI’s implications for scientific research, the President’s Council of Advisors on Science and Technology shall submit to the President and make publicly available a report on the potential role of AI, especially given recent developments in AI, in research aimed at tackling major societal and global challenges.  The report shall include a discussion of issues that may hinder the effective use of AI in research and practices needed to ensure that AI is used responsibly for research.

     5.3.  Promoting Competition.  (a)  The head of each agency developing policies and regulations related to AI shall use their authorities, as appropriate and consistent with applicable law, to promote competition in AI and related technologies, as well as in other markets.  Such actions include addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs.  In particular, the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.

     (b)  To promote competition and innovation in the semiconductor industry, recognizing that semiconductors power AI technologies and that their availability is critical to AI competition, the Secretary of Commerce shall, in implementing division A of Public Law 117-167, known as the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022, promote competition by:

          (i)    implementing a flexible membership structure for the National Semiconductor Technology Center that attracts all parts of the semiconductor and microelectronics ecosystem, including startups and small firms;

          (ii)   implementing mentorship programs to increase interest and participation in the semiconductor industry, including from workers in underserved communities;

          (iii)  increasing, where appropriate and to the extent permitted by law, the availability of resources to startups and small businesses, including:

               (A)  funding for physical assets, such as specialty equipment or facilities, to which startups and small businesses may not otherwise have access;

               (B)  datasets — potentially including test and performance data — collected, aggregated, or shared by CHIPS research and development programs;

               (C)  workforce development programs;

               (D)  design and process technology, as well as IP, as appropriate; and

               (E)  other resources, including technical and intellectual property assistance, that could accelerate commercialization of new technologies by startups and small businesses, as appropriate; and

          (iv)   considering the inclusion, to the maximum extent possible, and as consistent with applicable law, of competition-increasing measures in notices of funding availability for commercial research-and-development facilities focused on semiconductors, including measures that increase access to facility capacity for startups or small firms developing semiconductors used to power AI technologies.

     (c)  To support small businesses innovating and commercializing AI, as well as in responsibly adopting and deploying AI, the Administrator of the Small Business Administration shall:

          (i)    prioritize the allocation of Regional Innovation Cluster program funding for clusters that support planning activities related to the establishment of one or more Small Business AI Innovation and Commercialization Institutes that provide support, technical assistance, and other resources to small businesses seeking to innovate, commercialize, scale, or otherwise advance the development of AI;

          (ii)   prioritize the allocation of up to $2 million in Growth Accelerator Fund Competition bonus prize funds for accelerators that support the incorporation or expansion of AI-related curricula, training, and technical assistance, or other AI-related resources within their programming; and

          (iii)  assess the extent to which the eligibility criteria of existing programs, including the State Trade Expansion Program, Technical and Business Assistance funding, and capital-access programs — such as the 7(a) loan program, 504 loan program, and Small Business Investment Company (SBIC) program — support appropriate expenses by small businesses related to the adoption of AI and, if feasible and appropriate, revise eligibility criteria to improve support for these expenses. 

     (d)  The Administrator of the Small Business Administration, in coordination with resource partners, shall conduct outreach regarding, and raise awareness of, opportunities for small businesses to use capital-access programs described in subsection 5.3(c) of this section for eligible AI-related purposes, and for eligible investment funds with AI-related expertise — particularly those seeking to serve or with experience serving underserved communities — to apply for an SBIC license.

     Sec. 6.  Supporting Workers.  (a)  To advance the Government’s understanding of AI’s implications for workers, the following actions shall be taken within 180 days of the date of this order:

          (i)   The Chairman of the Council of Economic Advisers shall prepare and submit a report to the President on the labor-market effects of AI.

          (ii)  To evaluate necessary steps for the Federal Government to address AI-related workforce disruptions, the Secretary of Labor shall submit to the President a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements.  The report shall, at a minimum:

               (A)  assess how current or formerly operational Federal programs designed to assist workers facing job disruptions — including unemployment insurance and programs authorized by the Workforce Innovation and Opportunity Act (Public Law 113-128) — could be used to respond to possible future AI-related disruptions; and

               (B)  identify options, including potential legislative measures, to strengthen or develop additional Federal support for workers displaced by AI and, in consultation with the Secretary of Commerce and the Secretary of Education, strengthen and expand education and training opportunities that provide individuals pathways to occupations related to AI.

     (b)  To help ensure that AI deployed in the workplace advances employees’ well-being:

          (i)    The Secretary of Labor shall, within 180 days of the date of this order and in consultation with other agencies and with outside entities, including labor unions and workers, as the Secretary of Labor deems appropriate, develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.  The principles and best practices shall include specific steps for employers to take with regard to AI, and shall cover, at a minimum:

               (A)  job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers;

               (B)  labor standards and job quality, including issues related to the equity, protected-activity, compensation, health, and safety implications of AI in the workplace; and

               (C)  implications for workers of employers’ AI-related collection and use of data about them, including transparency, engagement, management, and activity protected under worker-protection laws.

          (ii)   After principles and best practices are developed pursuant to subsection (b)(i) of this section, the heads of agencies shall consider, in consultation with the Secretary of Labor, encouraging the adoption of these guidelines in their programs to the extent appropriate for each program and consistent with applicable law.

          (iii)  To support employees whose work is monitored or augmented by AI in being compensated appropriately for all of their work time, the Secretary of Labor shall issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked, as defined under the Fair Labor Standards Act of 1938, 29 U.S.C. 201 et seq., and other legal requirements.

     (c)  To foster a diverse AI-ready workforce, the Director of NSF shall prioritize available resources to support AI-related education and AI-related workforce development through existing programs.  The Director shall additionally consult with agencies, as appropriate, to identify further opportunities for agencies to allocate resources for those purposes.  The actions by the Director shall use appropriate fellowship programs and awards for these purposes.

     Sec. 7.  Advancing Equity and Civil Rights.

     7.1.  Strengthening AI and Civil Rights in the Criminal Justice System.  (a)  To address unlawful discrimination and other harms that may be exacerbated by AI, the Attorney General shall:

          (i)    consistent with Executive Order 12250 of November 2, 1980 (Leadership and Coordination of Nondiscrimination Laws), Executive Order 14091, and 28 C.F.R. 0.50-51, coordinate with and support agencies in their implementation and enforcement of existing Federal laws to address civil rights and civil liberties violations and discrimination related to AI; 

          (ii)   direct the Assistant Attorney General in charge of the Civil Rights Division to convene, within 90 days of the date of this order, a meeting of the heads of Federal civil rights offices — for which meeting the heads of civil rights offices within independent regulatory agencies will be encouraged to join — to discuss comprehensive use of their respective authorities and offices to:  prevent and address discrimination in the use of automated systems, including algorithmic discrimination; increase coordination between the Department of Justice’s Civil Rights Division and Federal civil rights offices concerning issues related to AI and algorithmic discrimination; improve external stakeholder engagement to promote public awareness of potential discriminatory uses and effects of AI; and develop, as appropriate, additional training, technical assistance, guidance, or other resources; and  

          (iii)  consider providing, as appropriate and consistent with applicable law, guidance, technical assistance, and training to State, local, Tribal, and territorial investigators and prosecutors on best practices for investigating and prosecuting civil rights violations and discrimination related to automated systems, including AI.

     (b)  To promote the equitable treatment of individuals and adhere to the Federal Government’s fundamental obligation to ensure fair and impartial justice for all, with respect to the use of AI in the criminal justice system, the Attorney General shall, in consultation with the Secretary of Homeland Security and the Director of OSTP:

          (i)    within 365 days of the date of this order, submit to the President a report that addresses the use of AI in the criminal justice system, including any use in:

               (A)  sentencing;

               (B)  parole, supervised release, and probation;

               (C)  bail, pretrial release, and pretrial detention;

               (D)  risk assessments, including pretrial, earned time, and early release or transfer to home-confinement determinations;

               (E)  police surveillance;

               (F)  crime forecasting and predictive policing, including the ingestion of historical crime data into AI systems to predict high-density “hot spots”;

               (G)  prison-management tools; and

               (H)  forensic analysis;  

          (ii)   within the report set forth in subsection 7.1(b)(i) of this section:

               (A)  identify areas where AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy, civil rights, and civil liberties; and

               (B)  recommend best practices for law enforcement agencies, including safeguards and appropriate use limits for AI, to address the concerns set forth in section 13(e)(i) of Executive Order 14074 as well as the best practices and the guidelines set forth in section 13(e)(iii) of Executive Order 14074; and  

          (iii)  supplement the report set forth in subsection 7.1(b)(i) of this section as appropriate with recommendations to the President, including with respect to requests for necessary legislation.  

     (c)  To advance the presence of relevant technical experts and expertise (such as machine-learning engineers, software and infrastructure engineering, data privacy experts, data scientists, and user experience researchers) among law enforcement professionals:

          (i)    The interagency working group created pursuant to section 3 of Executive Order 14074 shall, within 180 days of the date of this order, identify and share best practices for recruiting and hiring law enforcement professionals who have the technical skills mentioned in subsection 7.1(c) of this section, and for training law enforcement professionals about responsible application of AI.

          (ii)   Within 270 days of the date of this order, the Attorney General shall, in consultation with the Secretary of Homeland Security, consider those best practices and the guidance developed under section 3(d) of Executive Order 14074 and, if necessary, develop additional general recommendations for State, local, Tribal, and territorial law enforcement agencies and criminal justice agencies seeking to recruit, hire, train, promote, and retain highly qualified and service-oriented officers and staff with relevant technical knowledge.  In considering this guidance, the Attorney General shall consult with State, local, Tribal, and territorial law enforcement agencies, as appropriate.

          (iii)  Within 365 days of the date of this order, the Attorney General shall review the work conducted pursuant to section 2(b) of Executive Order 14074 and, if appropriate, reassess the existing capacity to investigate law enforcement deprivation of rights under color of law resulting from the use of AI, including through improving and increasing training of Federal law enforcement officers, their supervisors, and Federal prosecutors on how to investigate and prosecute cases related to AI involving the deprivation of rights under color of law pursuant to 18 U.S.C. 242. 

     7.2.  Protecting Civil Rights Related to Government Benefits and Programs.  (a)  To advance equity and civil rights, consistent with the directives of Executive Order 14091, and in addition to complying with the guidance on Federal Government use of AI issued pursuant to section 10.1(b) of this order, agencies shall use their respective civil rights and civil liberties offices and authorities — as appropriate and consistent with applicable law — to prevent and address unlawful discrimination and other harms that result from uses of AI in Federal Government programs and benefits administration.  This directive does not apply to agencies’ civil or criminal enforcement authorities.  Agencies shall consider opportunities to ensure that their respective civil rights and civil liberties offices are appropriately consulted on agency decisions regarding the design, development, acquisition, and use of AI in Federal Government programs and benefits administration.  To further these objectives, agencies shall also consider opportunities to increase coordination, communication, and engagement about AI as appropriate with community-based organizations; civil-rights and civil-liberties organizations; academic institutions; industry; State, local, Tribal, and territorial governments; and other stakeholders.  

     (b)  To promote equitable administration of public benefits:

          (i)   The Secretary of HHS shall, within 180 days of the date of this order and in consultation with relevant agencies, publish a plan, informed by the guidance issued pursuant to section 10.1(b) of this order, addressing the use of automated or algorithmic systems in the implementation by States and localities of public benefits and services administered by the Secretary, such as to promote:  assessment of access to benefits by qualified recipients; notice to recipients about the presence of such systems; regular evaluation to detect unjust denials; processes to retain appropriate levels of discretion of expert agency staff; processes to appeal denials to human reviewers; and analysis of whether algorithmic systems in use by benefit programs achieve equitable and just outcomes.

          (ii)  The Secretary of Agriculture shall, within 180 days of the date of this order and as informed by the guidance issued pursuant to section 10.1(b) of this order, issue guidance to State, local, Tribal, and territorial public-benefits administrators on the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs administered by the Secretary, to ensure that programs using those systems:

               (A)  maximize program access for eligible recipients;

               (B)  employ automated or algorithmic systems in a manner consistent with any requirements for using merit systems personnel in public-benefits programs;

               (C)  identify instances in which reliance on automated or algorithmic systems would require notification by the State, local, Tribal, or territorial government to the Secretary;

               (D)  identify instances when applicants and participants can appeal benefit determinations to a human reviewer for reconsideration and can receive other customer support from a human being;

               (E)  enable auditing and, if necessary, remediation of the logic used to arrive at an individual decision or determination to facilitate the evaluation of appeals; and

               (F)  enable the analysis of whether algorithmic systems in use by benefit programs achieve equitable outcomes.
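     By way of illustration only, the outcome analysis contemplated in item (F) above can begin with a comparison of automated approval rates across demographic groups. The following minimal Python sketch assumes a hypothetical record layout (a "group" label and an "approved" decision flag) and synthetic data; it is a sketch under those assumptions, not a method prescribed by the order.

# Minimal sketch: compare automated benefit-approval rates across groups
# and flag large disparities for human review. Field names and data are
# hypothetical.

from collections import defaultdict

def approval_rates_by_group(records):
    """Approval rate of the automated decision, per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        if rec["approved"]:
            approved[rec["group"]] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates):
    """Ratio of each group's approval rate to the highest group's rate.

    Ratios well below 1.0 (0.8 is a common informal benchmark) flag the
    kind of disparity that would merit further human review.
    """
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]
    rates = approval_rates_by_group(sample)
    print(rates)                          # e.g. {'A': 0.67, 'B': 0.33}
    print(disparate_impact_ratios(rates))  # e.g. {'A': 1.0, 'B': 0.5}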

     7.3.  Strengthening AI and Civil Rights in the Broader Economy.  (a)  Within 365 days of the date of this order, to prevent unlawful discrimination from AI used for hiring, the Secretary of Labor shall publish guidance for Federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.

     (b)  To address discrimination and biases against protected groups in housing markets and consumer financial markets, the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau are encouraged to consider using their authorities, as they deem appropriate, to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and:

          (i)   evaluate their underwriting models for bias or disparities affecting protected groups; and

          (ii)  evaluate automated collateral-valuation and appraisal processes in ways that minimize bias.

     (c)  Within 180 days of the date of this order, to combat unlawful discrimination enabled by automated or algorithmic tools used to make decisions about access to housing and in other real estate-related transactions, the Secretary of Housing and Urban Development shall, and the Director of the Consumer Financial Protection Bureau is encouraged to, issue additional guidance:

          (i)   addressing the use of tenant screening systems in ways that may violate the Fair Housing Act (Public Law 90-284), the Fair Credit Reporting Act (Public Law 91-508), or other relevant Federal laws, including how the use of data, such as criminal records, eviction records, and credit information, can lead to discriminatory outcomes in violation of Federal law; and

          (ii)  addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010 (title X of Public Law 111-203), or the Equal Credit Opportunity Act (Public Law 93-495) apply to the advertising of housing, credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising delivery, as well as on best practices to avoid violations of Federal law.

     (d)  To help ensure that people with disabilities benefit from AI’s promise while being protected from its risks, including unequal treatment from the use of biometric data like gaze direction, eye tracking, gait analysis, and hand motions, the Architectural and Transportation Barriers Compliance Board is encouraged, as it deems appropriate, to solicit public participation and conduct community engagement; to issue technical assistance and recommendations on the risks and benefits of AI in using biometric data as an input; and to provide people with disabilities access to information and communication technology and transportation services.

     Sec. 8.  Protecting Consumers, Patients, Passengers, and Students.  (a)  Independent regulatory agencies are encouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.

     (b)  To help ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors:

          (i)    Within 90 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an HHS AI Task Force that shall, within 365 days of its creation, develop a strategic plan that includes policies and frameworks — possibly including regulatory action, as appropriate — on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health), and identify appropriate guidance and resources to promote that deployment, including in the following areas:

               (A)  development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing — including quality measurement, performance improvement, program integrity, benefits administration, and patient experience — taking into account considerations such as appropriate human oversight of the application of AI-generated output;

               (B)  long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector, including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users; 

               (C)  incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems; 

               (D)  incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector;

               (E)  development, maintenance, and availability of documentation to help users determine appropriate and safe uses of AI in local settings in the health and human services sector;

               (F)  work to be done with State, local, Tribal, and territorial health and human services agencies to advance positive use cases and best practices for use of AI in local settings; and

               (G)  identification of uses of AI to promote workplace efficiency and satisfaction in the health and human services sector, including reducing administrative burdens.

          (ii)   Within 180 days of the date of this order, the Secretary of HHS shall direct HHS components, as the Secretary of HHS deems appropriate, to develop a strategy, in consultation with relevant agencies, to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality, including, as appropriate, in the areas described in subsection (b)(i) of this section.  This work shall include the development of AI assurance policy — to evaluate important aspects of the performance of AI-enabled healthcare tools — and infrastructure needs for enabling pre-market assessment and post-market oversight of AI-enabled healthcare-technology algorithmic system performance against real-world data.

          (iii)  Within 180 days of the date of this order, the Secretary of HHS shall, in consultation with relevant agencies as the Secretary of HHS deems appropriate, consider appropriate actions to advance the prompt understanding of, and compliance with, Federal nondiscrimination laws by health and human services providers that receive Federal financial assistance, as well as how those laws relate to AI.  Such actions may include:

               (A)  convening and providing technical assistance to health and human services providers and payers about their obligations under Federal nondiscrimination and privacy laws as they relate to AI and the potential consequences of noncompliance; and

               (B)  issuing guidance, or taking other action as appropriate, in response to any complaints or other reports of noncompliance with Federal nondiscrimination and privacy laws as they relate to AI.

          (iv)   Within 365 days of the date of this order, the Secretary of HHS shall, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, establish an AI safety program that, in partnership with voluntary federally listed Patient Safety Organizations:

               (A)  establishes a common framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings as well as specifications for a central tracking repository for associated incidents that cause harm, including through bias or discrimination, to patients, caregivers, or other parties; 

               (B)  analyzes captured data and generated evidence to develop, wherever appropriate, recommendations, best practices, or other informal guidelines aimed at avoiding these harms; and

               (C)  disseminates those recommendations, best practices, or other informal guidance to appropriate stakeholders, including healthcare providers.

          (v)    Within 365 days of the date of this order, the Secretary of HHS shall develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes.  The strategy shall, at a minimum:

               (A)  define the objectives, goals, and high-level principles required for appropriate regulation throughout each phase of drug development;

               (B)  identify areas where future rulemaking, guidance, or additional statutory authority may be necessary to implement such a regulatory system;

               (C)  identify the existing budget, resources, personnel, and potential for new public/private partnerships necessary for such a regulatory system; and

               (D)  consider risks identified by the actions undertaken to implement section 4 of this order.

     (c)  To promote the safe and responsible development and use of AI in the transportation sector, in consultation with relevant agencies:

          (i)    Within 30 days of the date of this order, the Secretary of Transportation shall direct the Nontraditional and Emerging Transportation Technology (NETT) Council to assess the need for information, technical assistance, and guidance regarding the use of AI in transportation.  The Secretary of Transportation shall further direct the NETT Council, as part of any such efforts, to:

               (A)  support existing and future initiatives to pilot transportation-related applications of AI, as they align with policy priorities articulated in the Department of Transportation’s (DOT) Innovation Principles, including, as appropriate, through technical assistance and connecting stakeholders;

               (B)  evaluate the outcomes of such pilot programs in order to assess when DOT, or other Federal or State agencies, have sufficient information to take regulatory actions, as appropriate, and recommend appropriate actions when that information is available; and

               (C)  establish a new DOT Cross-Modal Executive Working Group, which will consist of members from different divisions of DOT and coordinate applicable work among these divisions, to solicit and use relevant input from appropriate stakeholders.

          (ii)   Within 90 days of the date of this order, the Secretary of Transportation shall direct appropriate Federal Advisory Committees of the DOT to provide advice on the safe and responsible use of AI in transportation.  The committees shall include the Advanced Aviation Advisory Committee, the Transforming Transportation Advisory Committee, and the Intelligent Transportation Systems Program Advisory Committee.

          (iii)  Within 180 days of the date of this order, the Secretary of Transportation shall direct the Advanced Research Projects Agency-Infrastructure (ARPA-I) to explore the transportation-related opportunities and challenges of AI — including regarding software-defined AI enhancements impacting autonomous mobility ecosystems.  The Secretary of Transportation shall further encourage ARPA-I to prioritize the allocation of grants to those opportunities, as appropriate.  The work tasked to ARPA-I shall include soliciting input on these topics through a public consultation process, such as an RFI.

     (d)  To help ensure the responsible development and deployment of AI in the education sector, the Secretary of Education shall, within 365 days of the date of this order, develop resources, policies, and guidance regarding AI.  These resources shall address safe, responsible, and nondiscriminatory uses of AI in education, including the impact AI systems have on vulnerable and underserved communities, and shall be developed in consultation with stakeholders as appropriate.  They shall also include the development of an “AI toolkit” for education leaders implementing recommendations from the Department of Education’s AI and the Future of Teaching and Learning report, including appropriate human review of AI decisions, designing AI systems to enhance trust and safety and align with privacy-related laws and regulations in the educational context, and developing education-specific guardrails.

     (e)  The Federal Communications Commission is encouraged to consider actions related to how AI will affect communications networks and consumers, including by:

          (i)    examining the potential for AI to improve spectrum management, increase the efficiency of non-Federal spectrum usage, and expand opportunities for the sharing of non-Federal spectrum;

          (ii)   coordinating with the National Telecommunications and Information Administration to create opportunities for sharing spectrum between Federal and non-Federal spectrum operations;

          (iii)  providing support for efforts to improve network security, resiliency, and interoperability using next-generation technologies that incorporate AI, including self-healing networks, 6G, and Open RAN; and

          (iv)   encouraging, including through rulemaking, efforts to combat unwanted robocalls and robotexts that are facilitated or exacerbated by AI and to deploy AI technologies that better serve consumers by blocking unwanted robocalls and robotexts.

     Sec. 9.  Protecting Privacy.  (a)  To mitigate privacy risks potentially exacerbated by AI — including by AI’s facilitation of the collection or use of information about individuals, or the making of inferences about individuals — the Director of OMB shall:

          (i)    evaluate and take steps to identify commercially available information (CAI) procured by agencies, particularly CAI that contains personally identifiable information and including CAI procured from data brokers and CAI procured and processed indirectly through vendors, in appropriate agency inventory and reporting processes (other than when it is used for the purposes of national security);

          (ii)   evaluate, in consultation with the Federal Privacy Council and the Interagency Council on Statistical Policy, agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI that contains personally identifiable information (other than when it is used for the purposes of national security) to inform potential guidance to agencies on ways to mitigate privacy and confidentiality risks from agencies’ activities related to CAI;

          (iii)  within 180 days of the date of this order, in consultation with the Attorney General, the Assistant to the President for Economic Policy, and the Director of OSTP, issue an RFI to inform potential revisions to guidance to agencies on implementing the privacy provisions of the E-Government Act of 2002 (Public Law 107-347).  The RFI shall seek feedback regarding how privacy impact assessments may be more effective at mitigating privacy risks, including those that are further exacerbated by AI; and

          (iv)   take such steps as are necessary and appropriate, consistent with applicable law, to support and advance the near-term actions and long-term strategy identified through the RFI process, including issuing new or updated guidance or RFIs or consulting other agencies or the Federal Privacy Council.

     (b)  Within 365 days of the date of this order, to better enable agencies to use PETs to safeguard Americans’ privacy from the potential threats exacerbated by AI, the Secretary of Commerce, acting through the Director of NIST, shall create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI.  The guidelines shall, at a minimum, describe the significant factors that bear on differential-privacy safeguards and common risks to realizing differential privacy in practice.
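     For illustration, the core quantities that such guidelines would weigh (the privacy parameter epsilon, the sensitivity of a query, and the noise mechanism) can be seen in a minimal differentially private count. The Python sketch below uses the standard Laplace mechanism; the dataset and function names are hypothetical and the example is illustrative only.

# Minimal sketch: an epsilon-differentially-private count using the
# Laplace mechanism. A counting query has sensitivity 1, so noise drawn
# from Laplace(0, 1/epsilon) suffices; smaller epsilon means stronger
# privacy and noisier answers.

import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random()
    while u == 0.0:          # avoid log(0) in the rare zero draw
        u = random.random()
    if u < 0.5:
        return scale * math.log(2.0 * u)
    return -scale * math.log(2.0 * (1.0 - u))

def dp_count(values, predicate, epsilon):
    """Differentially private count of records satisfying `predicate`."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    ages = [23, 35, 41, 52, 29, 61, 47]                    # hypothetical records
    print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))  # noisy answer near 4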

     (c)  To advance research, development, and implementation related to PETs:

          (i)    Within 120 days of the date of this order, the Director of NSF, in collaboration with the Secretary of Energy, shall fund the creation of a Research Coordination Network (RCN) dedicated to advancing privacy research and, in particular, the development, deployment, and scaling of PETs.  The RCN shall serve to enable privacy researchers to share information, coordinate and collaborate in research, and develop standards for the privacy-research community.  

          (ii)   Within 240 days of the date of this order, the Director of NSF shall engage with agencies to identify ongoing work and potential opportunities to incorporate PETs into their operations.  The Director of NSF shall, where feasible and appropriate, prioritize research — including efforts to translate research discoveries into practical applications — that encourage the adoption of leading-edge PETs solutions for agencies’ use, including through research engagement through the RCN described in subsection (c)(i) of this section.

          (iii)  The Director of NSF shall use the results of the United States-United Kingdom PETs Prize Challenge to inform the approaches taken, and opportunities identified, for PETs research and adoption.

     Sec. 10.  Advancing Federal Government Use of AI.

     10.1.  Providing Guidance for AI Management.  (a)  To coordinate the use of AI across the Federal Government, within 60 days of the date of this order and on an ongoing basis as necessary, the Director of OMB shall convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations, other than the use of AI in national security systems.  The Director of OSTP shall serve as Vice Chair for the interagency council.  The interagency council’s membership shall include, at minimum, the heads of the agencies identified in 31 U.S.C. 901(b), the Director of National Intelligence, and other agencies as identified by the Chair.  Until agencies designate their permanent Chief AI Officers consistent with the guidance described in subsection 10.1(b) of this section, they shall be represented on the interagency council by an appropriate official at the Assistant Secretary level or equivalent, as determined by the head of each agency.  

     (b)  To provide guidance on Federal Government use of AI, within 150 days of the date of this order and updated periodically thereafter, the Director of OMB, in coordination with the Director of OSTP, and in consultation with the interagency council established in subsection 10.1(a) of this section, shall issue guidance to agencies to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government.  The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law:

          (i)     the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI, and carrying out the responsibilities described in section 8(c) of Executive Order 13960 of December 3, 2020 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government), and section 4(b) of Executive Order 14091;

          (ii)    the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures;

          (iii)   for the agencies identified in 31 U.S.C. 901(b), the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency;

          (iv)    required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework:  conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI;

          (v)     specific Federal Government uses of AI that are presumed by default to impact rights or safety;

          (vi)    recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes; 

          (vii)   requirements that agencies identified in 31 U.S.C. 901(b) develop AI strategies and pursue high-impact AI use cases;

          (viii)  in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:

               (A)  external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;

               (B)  testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;

               (C)  reasonable steps to watermark or otherwise label output from generative AI;

               (D)  application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;

               (E)  independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings;

               (F)  documentation and oversight of procured AI;

               (G)  maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;

               (H)  provision of incentives for the continuous improvement of procured AI; and

               (I)  training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and

          (ix)    requirements for public reporting on compliance with this guidance.
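     As an illustration of the watermarking referenced in item (viii)(C) above, one published family of techniques biases generation toward a pseudorandom "green" subset of the vocabulary and then tests suspect text for an excess of green tokens. The Python sketch below shows only the detection side, with hypothetical token ids and a simplified hashing rule; it is a simplified, hypothetical example rather than a method adopted under this order.

# Minimal sketch: statistical detection of a "green-list" text watermark.
# Each token is pseudorandomly assigned to a green list seeded by the
# previous token; watermarked generation over-selects green tokens, so a
# large positive z-score over many tokens suggests a watermark.

import hashlib
import math

GREEN_FRACTION = 0.5   # expected share of green tokens in unwatermarked text

def is_green(prev_token, token):
    """Pseudorandom green/red assignment keyed on the preceding token."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % 2 == 0

def green_z_score(tokens):
    """z-score of the observed green count against unwatermarked text."""
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1.0 - GREEN_FRACTION))
    return (greens - expected) / stddev

if __name__ == "__main__":
    suspect = [101, 2043, 77, 918, 4301, 12, 999, 313]   # hypothetical token ids
    print(f"green-token z-score: {green_z_score(suspect):.2f}")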

     (c)  To track agencies’ AI progress, within 60 days of the issuance of the guidance established in subsection 10.1(b) of this section and updated periodically thereafter, the Director of OMB shall develop a method for agencies to track and assess their ability to adopt AI into their programs and operations, manage its risks, and comply with Federal policy on AI.  This method should draw on existing related efforts as appropriate and should address, as appropriate and consistent with applicable law, the practices, processes, and capabilities necessary for responsible AI adoption, training, and governance across, at a minimum, the areas of information technology infrastructure, data, workforce, leadership, and risk management.  

     (d)  To assist agencies in implementing the guidance to be established in subsection 10.1(b) of this section:

          (i)   within 90 days of the issuance of the guidance, the Secretary of Commerce, acting through the Director of NIST, and in coordination with the Director of OMB and the Director of OSTP, shall develop guidelines, tools, and practices to support implementation of the minimum risk-management practices described in subsection 10.1(b)(iv) of this section; and

          (ii)  within 180 days of the issuance of the guidance, the Director of OMB shall develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with the guidance described in subsection 10.1(b) of this section and advance the other aims identified in section 7224(d)(1) of the Advancing American AI Act (Public Law 117-263, div. G, title LXXII, subtitle B). 

     (e)  To improve transparency for agencies’ use of AI, the Director of OMB shall, on an annual basis, issue instructions to agencies for the collection, reporting, and publication of agency AI use cases, pursuant to section 7225(a) of the Advancing American AI Act.  Through these instructions, the Director shall, as appropriate, expand agencies’ reporting on how they are managing risks from their AI use cases and update or replace the guidance originally established in section 5 of Executive Order 13960.

     (f)  To advance the responsible and secure use of generative AI in the Federal Government:

          (i)    As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.  Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments; establish guidelines and limitations on the appropriate use of generative AI; and, with appropriate safeguards in place, provide their personnel and programs with access to secure and reliable generative AI capabilities, at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights.  To protect Federal Government information, agencies are also encouraged to employ risk-management practices, such as training their staff on proper use, protection, dissemination, and disposition of Federal information; negotiating appropriate terms of service with vendors; implementing measures designed to ensure compliance with record-keeping, cybersecurity, confidentiality, privacy, and data protection requirements; and deploying other measures to prevent misuse of Federal Government information in generative AI. 

          (ii)   Within 90 days of the date of this order, the Administrator of General Services, in coordination with the Director of OMB, and in consultation with the Federal Secure Cloud Advisory Committee and other relevant agencies as the Administrator of General Services may deem appropriate, shall develop and issue a framework for prioritizing critical and emerging technologies offerings in the Federal Risk and Authorization Management Program authorization process, starting with generative AI offerings that have the primary purpose of providing large language model-based chat interfaces, code-generation and debugging tools, and associated application programming interfaces, as well as prompt-based image generators.  This framework shall apply for no less than 2 years from the date of its issuance.  Agency Chief Information Officers, Chief Information Security Officers, and authorizing officials are also encouraged to prioritize generative AI and other critical and emerging technologies in granting authorities for agency operation of information technology systems and any other applicable release or oversight processes, using continuous authorizations and approvals wherever feasible.

          (iii)  Within 180 days of the date of this order, the Director of the Office of Personnel Management (OPM), in coordination with the Director of OMB, shall develop guidance on the use of generative AI for work by the Federal workforce.

     (g)  Within 30 days of the date of this order, to increase agency investment in AI, the Technology Modernization Board shall consider, as it deems appropriate and consistent with applicable law, prioritizing funding for AI projects for the Technology Modernization Fund for a period of at least 1 year.  Agencies are encouraged to submit to the Technology Modernization Fund project funding proposals that include AI — and particularly generative AI — in service of mission delivery.

     (h)  Within 180 days of the date of this order, to facilitate agencies’ access to commercial AI capabilities, the Administrator of General Services, in coordination with the Director of OMB, and in collaboration with the Secretary of Defense, the Secretary of Homeland Security, the Director of National Intelligence, the Administrator of the National Aeronautics and Space Administration, and the head of any other agency identified by the Administrator of General Services, shall take steps consistent with applicable law to facilitate access to Federal Government-wide acquisition solutions for specified types of AI services and products, such as through the creation of a resource guide or other tools to assist the acquisition workforce.  Specified types of AI capabilities shall include generative AI and specialized computing infrastructure.

     (i)  The initial means, instructions, and guidance issued pursuant to subsections 10.1(a)-(h) of this section shall not apply to AI when it is used as a component of a national security system, which shall be addressed by the proposed National Security Memorandum described in subsection 4.8 of this order. 

     10.2.  Increasing AI Talent in Government.  (a)  Within 45 days of the date of this order, to plan a national surge in AI talent in the Federal Government, the Director of OSTP and the Director of OMB, in consultation with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Assistant to the President and Domestic Policy Advisor, and the Assistant to the President and Director of the Gender Policy Council, shall identify priority mission areas for increased Federal Government AI talent, the types of talent that are highest priority to recruit and develop to ensure adequate implementation of this order and use of relevant enforcement and regulatory authorities to address AI risks, and accelerated hiring pathways.

     (b)  Within 45 days of the date of this order, to coordinate rapid advances in the capacity of the Federal AI workforce, the Assistant to the President and Deputy Chief of Staff for Policy, in coordination with the Director of OSTP and the Director of OMB, and in consultation with the National Cyber Director, shall convene an AI and Technology Talent Task Force, which shall include the Director of OPM, the Director of the General Services Administration’s Technology Transformation Services, a representative from the Chief Human Capital Officers Council, the Assistant to the President for Presidential Personnel, members of appropriate agency technology talent programs, a representative of the Chief Data Officer Council, and a representative of the interagency council convened under subsection 10.1(a) of this section.  The Task Force’s purpose shall be to accelerate and track the hiring of AI and AI-enabling talent across the Federal Government, including through the following actions:

          (i)    within 180 days of the date of this order, tracking and reporting progress to the President on increasing AI capacity across the Federal Government, including submitting to the President a report and recommendations for further increasing capacity; 

          (ii)   identifying and circulating best practices for agencies to attract, hire, retain, train, and empower AI talent, including diversity, inclusion, and accessibility best practices, as well as to plan and budget adequately for AI workforce needs;

          (iii)  coordinating, in consultation with the Director of OPM, the use of fellowship programs and agency technology-talent programs and human-capital teams to build hiring capabilities, execute hires, and place AI talent to fill staffing gaps; and

          (iv)   convening a cross-agency forum for ongoing collaboration between AI professionals to share best practices and improve retention.

     (c)  Within 45 days of the date of this order, to advance existing Federal technology talent programs, the United States Digital Service, Presidential Innovation Fellowship, United States Digital Corps, OPM, and technology talent programs at agencies, with support from the AI and Technology Talent Task Force described in subsection 10.2(b) of this section, as appropriate and permitted by law, shall develop and begin to implement plans to support the rapid recruitment of individuals as part of a Federal Government-wide AI talent surge to accelerate the placement of key AI and AI-enabling talent in high-priority areas and to advance agencies’ data and technology strategies.

     (d)  To meet the critical hiring need for qualified personnel to execute the initiatives in this order, and to improve Federal hiring practices for AI talent, the Director of OPM, in consultation with the Director of OMB, shall:

          (i)     within 60 days of the date of this order, conduct an evidence-based review on the need for hiring and workplace flexibility, including Federal Government-wide direct-hire authority for AI and related data-science and technical roles, and, where the Director of OPM finds such authority is appropriate, grant it; this review shall include the following job series at all General Schedule (GS) levels:  IT Specialist (2210), Computer Scientist (1550), Computer Engineer (0854), and Program Analyst (0343) focused on AI, and any subsequently developed job series derived from these job series;

          (ii)    within 60 days of the date of this order, consider authorizing the use of excepted service appointments under 5 C.F.R. 213.3102(i)(3) to address the need for hiring additional staff to implement directives of this order;

          (iii)   within 90 days of the date of this order, coordinate a pooled-hiring action informed by subject-matter experts and using skills-based assessments to support the recruitment of AI talent across agencies;

          (iv)    within 120 days of the date of this order, as appropriate and permitted by law, issue guidance for agency application of existing pay flexibilities or incentive pay programs for AI, AI-enabling, and other key technical positions to facilitate appropriate use of current pay incentives;

          (v)     within 180 days of the date of this order, establish guidance and policy on skills-based, Federal Government-wide hiring of AI, data, and technology talent in order to increase access to those with nontraditional academic backgrounds to Federal AI, data, and technology roles; 

          (vi)    within 180 days of the date of this order, establish an interagency working group, staffed with both human-resources professionals and recruiting technical experts, to facilitate Federal Government-wide hiring of people with AI and other technical skills;

          (vii)   within 180 days of the date of this order, review existing Executive Core Qualifications (ECQs) for Senior Executive Service (SES) positions informed by data and AI literacy competencies and, within 365 days of the date of this order, implement new ECQs as appropriate in the SES assessment process;

          (viii)  within 180 days of the date of this order, complete a review of competencies for civil engineers (GS-0810 series) and, if applicable, other related occupations, and make recommendations for ensuring that adequate AI expertise and credentials in these occupations in the Federal Government reflect the increased use of AI in critical infrastructure; and

          (ix)    work with the Security, Suitability, and Credentialing Performance Accountability Council to assess mechanisms to streamline and accelerate personnel-vetting requirements, as appropriate, to support AI and fields related to other critical and emerging technologies.  

     (e)  To expand the use of special authorities for AI hiring and retention, agencies shall use all appropriate hiring authorities, including Schedule A(r) excepted service hiring and direct-hire authority, as applicable and appropriate, to hire AI talent and AI-enabling talent rapidly.  In addition to participating in OPM-led pooled hiring actions, agencies shall collaborate, where appropriate, on agency-led pooled hiring under the Competitive Service Act of 2015 (Public Law 114-137) and other shared hiring.  Agencies shall also, where applicable, use existing incentives, pay-setting authorities, and other compensation flexibilities, similar to those used for cyber and information technology positions, for AI and data-science professionals, as well as plain-language job titles, to help recruit and retain these highly skilled professionals.  Agencies shall ensure that AI and other related talent needs (such as technology governance and privacy) are reflected in strategic workforce planning and budget formulation. 

     (f)  To facilitate the hiring of data scientists, the Chief Data Officer Council shall develop a position-description library for data scientists (job series 1560) and a hiring guide to support agencies in hiring data scientists.

     (g)  To help train the Federal workforce on AI issues, the head of each agency shall implement — or increase the availability and use of — AI training and familiarization programs for employees, managers, and leadership in technology as well as relevant policy, managerial, procurement, regulatory, ethical, governance, and legal fields.  Such training programs should, for example, empower Federal employees, managers, and leaders to develop and maintain an operating knowledge of emerging AI technologies to assess opportunities to use these technologies to enhance the delivery of services to the public, and to mitigate risks associated with these technologies.  Agencies that provide professional-development opportunities, grants, or funds for their staff should take appropriate steps to ensure that employees who do not serve in traditional technical roles, such as policy, managerial, procurement, or legal fields, are nonetheless eligible to receive funding for programs and courses that focus on AI, machine learning, data science, or other related subject areas.  

     (h)  Within 180 days of the date of this order, to address gaps in AI talent for national defense, the Secretary of Defense shall submit a report to the President through the Assistant to the President for National Security Affairs that includes:

          (i)    recommendations to address challenges in the Department of Defense’s ability to hire certain noncitizens, including at the Science and Technology Reinvention Laboratories;

          (ii)   recommendations to clarify and streamline processes for accessing classified information for certain noncitizens through Limited Access Authorization at Department of Defense laboratories;

          (iii)  recommendations for the appropriate use of enlistment authority under 10 U.S.C. 504(b)(2) for experts in AI and other critical and emerging technologies; and

          (iv)   recommendations for the Department of Defense and the Department of Homeland Security to work together to enhance the use of appropriate authorities for the retention of certain noncitizens of vital importance to national security by the Department of Defense and the Department of Homeland Security.  

     Sec. 11.  Strengthening American Leadership Abroad.  (a)  To strengthen United States leadership of global efforts to unlock AI’s potential and meet its challenges, the Secretary of State, in coordination with the Assistant to the President for National Security Affairs, the Assistant to the President for Economic Policy, the Director of OSTP, and the heads of other relevant agencies as appropriate, shall:

          (i)   lead efforts outside of military and intelligence areas to expand engagements with international allies and partners in relevant bilateral, multilateral, and multi-stakeholder fora to advance those allies’ and partners’ understanding of existing and planned AI-related guidance and policies of the United States, as well as to enhance international collaboration; and

          (ii)  lead efforts to establish a strong international framework for managing the risks and harnessing the benefits of AI, including by encouraging international allies and partners to support voluntary commitments similar to those that United States companies have made in pursuit of these objectives and coordinating the activities directed by subsections (b), (c), (d), and (e) of this section, and to develop common regulatory and other accountability principles for foreign nations, including to manage the risk that AI systems pose.

     (b)  To advance responsible global technical standards for AI development and use outside of military and intelligence areas, the Secretary of Commerce, in coordination with the Secretary of State and the heads of other relevant agencies as appropriate, shall lead preparations for a coordinated effort with key international allies and partners and with standards development organizations, to drive the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.  In particular, the Secretary of Commerce shall:

          (i)    within 270 days of the date of this order, establish a plan for global engagement on promoting and developing AI standards, with lines of effort that may include:

               (A)  AI nomenclature and terminology;

               (B)  best practices regarding data capture, processing, protection, privacy, confidentiality, handling, and analysis;

               (C)  trustworthiness, verification, and assurance of AI systems; and

               (D)  AI risk management;

          (ii)   within 180 days of the date the plan is established, submit a report to the President on priority actions taken pursuant to the plan; and

          (iii)  ensure that such efforts are guided by principles set out in the NIST AI Risk Management Framework and United States Government National Standards Strategy for Critical and Emerging Technology.

     (c)  Within 365 days of the date of this order, to promote safe, responsible, and rights-affirming development and deployment of AI abroad:

          (i)   The Secretary of State and the Administrator of the United States Agency for International Development, in coordination with the Secretary of Commerce, acting through the Director of NIST, shall publish an AI in Global Development Playbook that incorporates the AI Risk Management Framework’s principles, guidelines, and best practices into the social, technical, economic, governance, human rights, and security conditions of contexts beyond United States borders.  As part of this work, the Secretary of State and the Administrator of the United States Agency for International Development shall draw on lessons learned from programmatic uses of AI in global development.

          (ii)  The Secretary of State and the Administrator of the United States Agency for International Development, in collaboration with the Secretary of Energy and the Director of NSF, shall develop a Global AI Research Agenda to guide the objectives and implementation of AI-related research in contexts beyond United States borders.  The Agenda shall:

               (A)  include principles, guidelines, priorities, and best practices aimed at ensuring the safe, responsible, beneficial, and sustainable global development and adoption of AI; and

               (B)  address AI’s labor-market implications across international contexts, including by recommending risk mitigations.  

     (d)  To address cross-border and global AI risks to critical infrastructure, the Secretary of Homeland Security, in coordination with the Secretary of State, and in consultation with the heads of other relevant agencies as the Secretary of Homeland Security deems appropriate, shall lead efforts with international allies and partners to enhance cooperation to prevent, respond to, and recover from potential critical infrastructure disruptions resulting from incorporation of AI into critical infrastructure systems or malicious use of AI. 

          (i)   Within 270 days of the date of this order, the Secretary of Homeland Security, in coordination with the Secretary of State, shall develop a plan for multilateral engagements to encourage the adoption of the AI safety and security guidelines for use by critical infrastructure owners and operators developed in section 4.3(a) of this order.

          (ii)  Within 180 days of establishing the plan described in subsection (d)(i) of this section, the Secretary of Homeland Security shall submit a report to the President on priority actions to mitigate cross-border risks to critical United States infrastructure.

     Sec. 12.  Implementation.  (a)  There is established, within the Executive Office of the President, the White House Artificial Intelligence Council (White House AI Council).  The function of the White House AI Council is to coordinate the activities of agencies across the Federal Government to ensure the effective formulation, development, communication, industry engagement related to, and timely implementation of AI-related policies, including policies set forth in this order.

     (b)  The Assistant to the President and Deputy Chief of Staff for Policy shall serve as Chair of the White House AI Council.

     (c)  In addition to the Chair, the White House AI Council shall consist of the following members, or their designees:

          (i)       the Secretary of State;

          (ii)      the Secretary of the Treasury;

          (iii)     the Secretary of Defense;

          (iv)      the Attorney General;

          (v)       the Secretary of Agriculture;

          (vi)      the Secretary of Commerce;

          (vii)     the Secretary of Labor;

          (viii)    the Secretary of HHS;

          (ix)      the Secretary of Housing and Urban Development;

          (x)       the Secretary of Transportation;

          (xi)      the Secretary of Energy;

          (xii)     the Secretary of Education;

          (xiii)    the Secretary of Veterans Affairs;

          (xiv)     the Secretary of Homeland Security;

          (xv)      the Administrator of the Small Business Administration;

          (xvi)     the Administrator of the United States Agency for International Development;

          (xvii)    the Director of National Intelligence;

          (xviii)   the Director of NSF;

          (xix)     the Director of OMB;

          (xx)      the Director of OSTP;

          (xxi)     the Assistant to the President for National Security Affairs;

          (xxii)    the Assistant to the President for Economic Policy;

          (xxiii)   the Assistant to the President and Domestic Policy Advisor;

          (xxiv)    the Assistant to the President and Chief of Staff to the Vice President;

          (xxv)     the Assistant to the President and Director of the Gender Policy Council;

          (xxvi)    the Chairman of the Council of Economic Advisers;

          (xxvii)   the National Cyber Director;

          (xxviii)  the Chairman of the Joint Chiefs of Staff; and

          (xxix)    the heads of such other agencies, independent regulatory agencies, and executive offices as the Chair may from time to time designate or invite to participate.

     (d)  The Chair may create and coordinate subgroups consisting of White House AI Council members or their designees, as appropriate.

     Sec. 13.  General Provisions.  (a)  Nothing in this order shall be construed to impair or otherwise affect:

          (i)   the authority granted by law to an executive department or agency, or the head thereof; or

          (ii)  the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals.

     (b)  This order shall be implemented consistent with applicable law and subject to the availability of appropriations.

     (c)  This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.

                             JOSEPH R. BIDEN JR.

THE WHITE HOUSE,
  October 30, 2023.