Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Yes and no about the AI thing, the "AI" we have right now are essentially big YES machines. If you argue with it enough, it will eventually agree with you on just about everything, and even the safe guards are not great. Like as long as you say it's for pretend, it will give you real answers to what shouldn't be asked for. This is because of how the AI;s generate information. It wouldn't be helpful if it gave you answers you weren't looking for, so if you just gaslight it enough on things that are factually wrong, it will eventually agree with you.
youtube
AI Harm Incident
2025-12-22T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgwXy0i0B53l75eCio14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxpDsdLG-4jaF4EF9F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzqvO8QxPHRlfWblmV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzxOTr7dGX_w3z-x154AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugym2pKZx7Wz_8URzsx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz9daB98pmW6g4PVcd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugw9TlHMso9c1Sash1J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzLCcg_t26hzyYpeVt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugweva5fX3O1xbzZNjl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxy01SzqB-QThWpxwt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}]
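A raw response like the one above can be parsed and sanity-checked before it is loaded into the coding table. The sketch below is a minimal validator; the allowed values per dimension are inferred from the visible output and are assumptions, not the project's actual codebook.

```python
import json

# Allowed values per coding dimension. These sets are assumptions
# reconstructed from the sample output above, not an official schema.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"resignation", "fear", "outrage", "indifference",
                "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema.

    Raises ValueError on a missing id or an out-of-vocabulary value,
    so malformed model output fails loudly instead of being stored.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# Usage with a single hypothetical record (the id is illustrative only).
sample = ('[{"id":"ytc_x","responsibility":"user",'
          '"reasoning":"deontological","policy":"none",'
          '"emotion":"outrage"}]')
codings = validate_codings(sample)
print(len(codings))
```

A response with a stray trailing character (such as the `)` that can appear when the model closes the array incorrectly) will raise `json.JSONDecodeError` here, which is exactly the failure mode worth catching before coding results are recorded.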