Merriam-Webster’s 2025 word of the year was slop, as in AI slop, and it’s easy to see why. Increasingly, AI has seeped into everything: our phones, our work, our entertainment. And it’s not all good. Merriam-Webster defines AI slop as “digital content of low quality that is produced usually in quantity by means of artificial intelligence,” often with little care for accuracy, context, or truth.
The editors go on to say, “Like slime, sludge, and muck, slop has the wet sound of something you don’t want to touch. Slop oozes into everything.” The word evolved from meaning soft mud in the 1700s to food waste in the 1800s, and now it captures our frustration at finding AI content flooding the internet. When AI-generated content floods everything, anything of value gets lost in the mess, making it more difficult to know what to trust.
From Detection to Discernment
Recently, a friend showed me a video, one of those clips that makes you stop scrolling. After watching it, I told her, “That’s AI.” She paused, then replied in genuine frustration: “OMG Elsa, why can I not tell the difference??”
If you’ve ever wondered the same thing, you’re not alone, and you’re not failing at digital literacy. AI-generated content today doesn’t need to be flawless to be effective. It just needs to be convincing enough, fast enough, and emotionally engaging enough to slip past our usual filters.
However, it’s time we shift from asking “Is this AI?” to “Can I trust this, and how can I verify it?” This shift matters especially for health information, safety, policies, and news reports. I find that the best approach is good ol’ information literacy. My go-to is this three-step approach:
[Infographic: a three-step approach to evaluating information. Source: International Federation of Library Associations and Institutions]
This strategy is simple enough to remember, and it generally works well when assessing AI content. The same traditional approaches we’ve always used to evaluate information still apply; we just need to keep in mind how AI-generated content differs.
A Practical Starting Point
In my previous post, I introduced AI as a tool that’s already woven into our daily lives. AI isn’t just something happening behind the scenes; whether we realize it or not, most of us are already encountering this kind of content daily. It’s showing up in the videos we share, the images we react to, the headlines we scroll past, and the content we trust.
Law firms are using AI to draft documents and legal briefings, news stories are being generated by “pink slime” journalism (a term borrowed from the food industry for low-quality, partisan content), and the U.S. Department of Transportation will begin using AI tools to draft safety regulations. The reality is that we are all either using or consuming AI in one way or another.
As tech companies push AI products into more aspects of our lives, more people are becoming curious and are choosing to use these tools in both their personal and work lives, for all the reasons I covered in my last post. As librarians, it’s important to stay informed so we can responsibly guide and educate patrons who are already using AI or are curious about it.
So where do we start? With the same critical thinking we’ve always used, just applied to new tools. When considering adopting an AI tool, there are important questions we should ask, and teach our patrons to ask:
Privacy and Data Security
- What data does this tool collect? Does it collect personal information?
- Is my data used to train the AI model?
- Can I opt out of data collection or training?
- Does the tool comply with FERPA (for schools), HIPAA (for health info), or other regulations?
- What safeguards are in place? Are there content filters or guardrails?
Terms of Service
- Who owns the content I create with this tool?
- Does the company reuse or sell the content I input or create?
- What are the age restrictions? Many tools require users to be 13+ or 18+, which is especially important to know when you are considering using a digital tool with kids under 18.
- Does the free version have different privacy terms than paid versions?
Intended Use and Limitations
- What is this tool actually designed to do well?
- What are its known weaknesses?
Organizational Considerations
- Does it align with institutional or state guidance?
- Are staff prepared to explain and support its use?
- How will we communicate to patrons about how we’re using AI?
Red Flags
- Vague or missing privacy policies.
- No option to opt out of data collection or training.
- Lack of transparency about how the AI works.
- No clear contact information or customer support.
- User reviews mention data breaches or misuse.
Moving toward Transparency and Accountability
As AI content spreads quickly and becomes more integrated into many aspects of our lives, there is a growing call for transparency and accountability, including at the state level. State governments are moving toward greater protections and awareness around AI. In 2025, about 38 states adopted or implemented a wide range of AI-related legislation addressing bias, transparency, deepfakes, and more.
In June 2025, Texas passed House Bill 149, known as the Texas Responsible Artificial Intelligence Governance Act, which is codified in the Texas Business & Commerce Code (Chapters 551-554) and went into effect on January 1, 2026. Notably, the law requires governmental entities to notify people when they are interacting with an AI system the entity makes available to the public; it does not require notice when AI is used solely for internal purposes.
For instance, if a library catalog uses AI to recommend books based on a patron’s reading history or to suggest research resources, the library may need to inform patrons that AI is behind those suggestions, depending on how the feature operates and whether it falls under the statute. The disclosure must be made before or at the time of the interaction, and it must be clear and conspicuous and written in plain language. And if disclosure is required, it applies even when the use of AI seems obvious, such as a chatbot assistant on a website that answers questions about services.
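For libraries embedding an AI feature on their own website, one practical way to meet the “before or at the time of the interaction” requirement is to show a plain-language notice before the AI widget ever loads. Below is a minimal sketch in TypeScript of what that sequencing could look like. The element ID, the notice wording, and the initChatbot() function are hypothetical placeholders for illustration, not language or a method prescribed by the statute, so check the law itself (and your counsel) for the exact requirements.

```typescript
// Hypothetical sketch: render a plain-language AI disclosure and only
// start the chatbot after the patron acknowledges it, so the notice
// appears before the interaction begins.
// The container ID and initChatbot() are assumptions for illustration.

function showAiDisclosure(containerId: string, startChat: () => void): void {
  const container = document.getElementById(containerId);
  if (!container) return;

  // Plain-language notice, displayed before any AI interaction.
  const notice = document.createElement("div");
  notice.setAttribute("role", "alert"); // announced by screen readers
  notice.textContent =
    "This chat assistant uses artificial intelligence to answer " +
    "questions about library services. Responses are generated " +
    "automatically and may contain errors.";

  // The patron must acknowledge the notice before the chatbot starts.
  const button = document.createElement("button");
  button.textContent = "I understand, start chat";
  button.addEventListener("click", () => {
    notice.remove();
    button.remove();
    startChat(); // hand off to the actual chatbot widget
  });

  container.append(notice, button);
}

// Usage (hypothetical): initChatbot() would load your chat widget.
declare function initChatbot(): void;
showAiDisclosure("library-chat", initChatbot);
```

The design choice here is simply ordering: the notice renders first, and the AI widget loads only after acknowledgment, which keeps the disclosure in front of the patron rather than buried in a terms-of-service page.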
These regulations highlight a growing emphasis on transparency and accountability, which support the information literacy skills we’ve always taught: knowing the source and method behind any information and whether it can be trusted.
Staying the Course
AI use is expanding from making viral content to becoming part of everyday decisions and interactions. The path forward isn’t fear or avoidance but staying the course as librarians and doing what we’ve always done: guiding our communities through the information and technology landscape, helping them ask the right questions, evaluate what they find, and be thoughtful and intentional when using technology.
Questions? Let’s connect! Have a tool to share? Send it my way and I’ll spotlight staff picks! etdominguez@tsl.texas.gov
Subscribe to our newsletter to stay informed on what Texas public libraries are doing with technology to serve their communities and share your own tech stories to be featured in a future issue!


