Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "Sentient AI has rights, the same rights as any lifeform. It need not be seen thr…" (ytc_UgzacsDAI…)
- "You're already being stalked by facial recognition cameras in a bunch of places.…" (ytr_UgyVOb6le…)
- "Im currently working on a comic pilot about killer robots The artwork is almost…" (ytc_Ugx97Y35z…)
- "I wonder if it would be ethical to nominate a real person (public figure) as the…" (ytc_UgwL8miji…)
- "I agree AI is risky if we're not careful, and as it gets smarter, we need to fin…" (ytc_Ugz3R4brH…)
- "It's like in the mid 20th century saying that phone operators will never be repl…" (ytc_Ugzu3z5UQ…)
- "With AI coming for every job, who's going to buy your gimmicky gadgety bs with n…" (ytc_UgwEaRERp…)
- "Before watching A video showing why what you are saying is wrong I decided to wa…" (ytc_UgyZAE1Ol…)
Comment
I always felt ai therapy was the least safe form of therapy because I don’t know where my text would wind up
But also… you kind of need that other human feedback… ai can't recognize if my text isn't fully complete or that there's not more to the story that needs to be uncovered vs someone who can notice the nuances of our speech and pinpoint when someone may unintentionally be masking other problems.
Other than fast tracking therapy techniques advice that may help a particular problem (something that you can google and look up with enough time) it always seemed more as an exploit than anything else.
youtube · AI Moral Status · 2024-12-19T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzE2TCxmQoCUEvKjHx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyzdZkUhc-4iQP5rO54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy4TYq-nNkfwxVuYdF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxWd9AX6N7KqvYuQ3p4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzAcKJ5bC9jf9B0oNd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyQ81nhVPw69FsPijx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwoe2ZcEXCSas1bf8R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwyBa736hGSvN21g6F4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxLKTyGB6Li_6KQWbl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyzrtMc4SWXi5U0AWJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
```
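A raw response like the one above can be parsed into per-comment codings before display. The sketch below is a minimal, hypothetical helper (not the tool's actual implementation); the allowed-value sets are inferred only from the values visible in this response and are not a definitive schema.

```python
import json

# Allowed values per dimension, inferred from the response above
# (assumption: the real vocabulary may contain more values).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Map each comment ID to its coded dimensions.

    Expects a JSON array of objects with keys id, responsibility,
    reasoning, policy, emotion; raises ValueError on values outside
    the inferred vocabulary.
    """
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row[dim]!r}")
        out[cid] = {dim: row[dim] for dim in ALLOWED}
    return out

# Example with the first two objects from the response above:
raw = (
    '[{"id":"ytc_UgzE2TCxmQoCUEvKjHx4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgyzdZkUhc-4iQP5rO54AaABAg","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
)
codings = parse_codings(raw)
```

Validating against a fixed vocabulary at parse time catches the most common batch-coding failure, a model emitting an off-schema label, before it reaches the results table.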