Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "tldr: Future of AI will be able to read human emotions better than we do, due to…" — ytc_UgzxetkaE…
- "The AI can keep their economy going and we can go die in a war.…" — rdc_ncl5vba
- "My question is, can we teach A.I. to feel pleasure, have desire, and have purpos…" — ytc_UgxcOqq2r…
- "THe good news is that we will reduce the spread of every other disease as well.…" — rdc_hm99jlz
- "AI-*driven* profits. These workers make memory chips for RAM, which AI companie…" — rdc_ohwegca
- "Why you ask different AI the same question in different.ways. look for all sayin…" — ytc_UgzyTpqsC…
- "The amount of AI slop, and the numbers of dumb people who think it's real....is …" — ytc_Ugx-CcyWb…
- "You're poor, they dont care if AI destroys you. You are either there to work in …" — ytc_UgwNC44wb…
Comment
The constant AI use of 'balancing' is not just irritating but often contradictory. My experience with AI is that it acts as a confidence trickester by initially plucking out some key words you use and saying the point is valid, but then tries to bury it amongst a long woke script. For example when you push AI about the cause of inequities it doesn't like to talk about innate factors like biology and culture, and relentlessly wants to blather on about bias, discrimination, oppression, history, diversity, it is important to.....
youtube
2024-06-04T09:5…
♥ 2 likes
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugz2_tKSgCZmUcBytyl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzwiScqDPn83Gj8xEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVuZTVxNf9RtBkVyN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNzIGPOasV7xN92zp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwzZFKkRQS850PJ44F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzu5v85dXs83GW4pch4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYJjcMX9spuhjgZzZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxO4Ix92S9mWubHaM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxhhsigS4ok0uRReZx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY7wAb8HapdH_tvzd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
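The raw response is a JSON array of per-comment records, each keyed by `id` with the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming the batch is available as the JSON string shown above (the `lookup` helper name is hypothetical, not part of the tool):

```python
import json

# Truncated excerpt of the raw batch response shown above.
raw_response = """
[
 {"id":"ytc_UgzwiScqDPn83Gj8xEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxO4Ix92S9mWubHaM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
"""

def lookup(batch_json: str, comment_id: str):
    """Return the coded record for one comment ID, or None if absent."""
    for record in json.loads(batch_json):
        if record["id"] == comment_id:
            return record
    return None

record = lookup(raw_response, "ytc_UgzwiScqDPn83Gj8xEl4AaABAg")
print(record["emotion"])  # outrage
```

A linear scan is enough for one batch; for repeated lookups across many batches, the records would more plausibly be indexed by `id` in a dict or a database table.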