
Let’s talk about … AI and Washington, D.C.
This fall, I found myself in Washington, D.C., and Baltimore, surrounded by policymakers, educators, labor advocates, and technologists as they debated the future of AI. Beyond the panels, the thread running through every room was the same: Workers need a real voice in how AI is built and deployed.
In late September, I spoke at an academic collective helping college professors navigate AI in the creative sectors. I was surprised by how hesitant young artists are to integrate AI into their creative workflows. Some universities now require an introductory AI course for freshmen. Given the lack of regulation at the federal level, other institutions are looking to labor unions and their collective bargaining agreements for guidance on how to support students.
Additionally, I was invited to attend the Bloomberg Beta “Going to Work” think tank, where leaders from the tech, labor, venture capital, and political arenas gather to discuss real-world issues: upskilling and reskilling, building a bipartisan pro-worker coalition, AI in primary school education, AI developments in China, and the challenges of re-industrializing the US workforce, among other topics.
Now more than ever, these conversations have broadened my perspective on AI beyond our industry microcosm to the entire US labor force.
Policy, as we know, can evolve fast. Earlier this year, Congress considered a provision in the budget reconciliation bill that would have banned states from enacting or enforcing AI protections for the next ten years. IATSE’s Political Affairs department, led by Director Tyler McIntosh, mobilized members to contact their senators and urge them to vote against this provision. On July 1, 2025, the Senate voted 99-1 to remove the provision from the legislation. This is just one of the many ways your PAC contributions continue to establish a presence at the federal level.
When the labor workforce has an authentic voice, Americans have the power to shape a path toward integrating technology safely and ethically. As the AFL-CIO puts it, “AI should be about benefiting everyone, not just tech billionaires and corporate shareholders.”
On October 15, 2025, the AFL-CIO published Artificial Intelligence: Principles to Protect Workers, a set of guidelines that prioritizes people and puts workers at the forefront of the research and development process for implementing AI. This blueprint outlines how employers can collaborate with unions to ensure that workers benefit from AI rather than being harmed by it.
The eight principles published by the AFL-CIO are:
- Strengthen labor rights and broaden opportunities for collective bargaining
- Advance guardrails against harmful uses of AI in the workplace
- Support and promote copyright and intellectual property protections
- Develop a worker-centered workforce development and training system
- Institutionalize worker voice within AI research and development
- Require transparency and accountability in AI applications
- Model best practices for AI use with government procurement
- Protect workers’ civil rights and uphold democratic integrity
President Matthew Loeb, Political Affairs Director Tyler McIntosh, Vice President and AI Chair Vanessa Holtgrewe, and I met to provide input on these AI principles through the Department for Professional Employees, a coalition of unions affiliated with the AFL-CIO. Our feedback emphasized copyright and IP infringement issues, which are reflected in the third principle on this list:
- Support and promote copyright and intellectual property protections
Workers in creative industries and sports face the continuing risk of seeing their works, their voices, and their likenesses stolen by generative AI. Without protections, AI may upend the livelihoods of professionals who rely on effective copyright and intellectual property rights to earn compensation and benefits, as well as to ensure future career opportunities. Upholding these protections, like making sure AI is not trained on creative works without explicit consent and compensation, ensures creative professionals maintain their pay, healthcare, retirement security, and future job opportunities.
Public opinion backs this up: Approximately 80% of Americans believe the government should maintain AI safety and data security rules, even if it slows down development. That may temper the pace, but putting workers at the table is how we get an AI future that actually improves jobs.
I encourage you to reach out to me at jillian@local695.com. Let’s talk use cases, concerns, or contract language that keeps our members safe.
In Solidarity,
President Jillian Arnold