Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI will become racist if not tampered with by the creators. Do memes imitate lif…" (ytc_UgxNiDbDz…)
- "One day there will be a court case to decide if AI should be given the same righ…" (ytc_UgzovrEVD…)
- "Tesla can ban features like self driving when used improperly. Not like that's a…" (ytc_UgyURo9h2…)
- "It’s terrifying. Even some of the AI bosses are warning governments to do someth…" (ytc_UgwVAHvgn…)
- "2027? Even nowadays people are so co-dependent of technology and AI that they'r…" (ytc_Ugy8doUzy…)
- "This is my suggestion, you can correct me if am wrong. A more ethical way to mak…" (ytc_UgxFI9AbL…)
- "This is the more realistic scenario: companies do replace people with AI, but …" (ytc_UgzWCl8gI…)
- "Perhaps if robots would have ability to feel empathy, it would make it more acce…" (ytc_UgzoMvSbY…)
Comment

> If AI were to completely take over some Matrix like scenario seems likely to happen eventually, and if thats the case then we're most likely already in a simulation and AI has already taken over.

youtube · AI Governance · 2023-07-07T13:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzW71ADwMkIMKuIn3p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzFRd9eZusnZ5I33Fh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzlffSaLs7pLqQVjVx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw4Bi5RRfukN1W8tYR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyMx0CYeU928pC6xC14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyNnwCpi0FjDEs8Ht54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy0YUs0RX9JPu74vz54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyeG2buH6J8FSbOAfl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwTWD_WZLFw6DUzwAF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugxi3DQfZMTYvJ2JCQp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
```
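The raw response is a JSON array with one code row per comment. A minimal sketch of how such a payload might be parsed and validated before it is stored; the allowed values per dimension are inferred only from the codes visible on this page, and the real codebook may define additional categories:

```python
import json

# Allowed values per coding dimension, inferred from the rows shown above.
# This is an assumption: the actual codebook may permit more categories.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "approval", "resignation", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed code rows.

    A row is kept only if it has an "id" and every dimension holds a
    value from the ALLOWED sets; malformed rows are silently dropped.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Usage with a single hypothetical row (the id is illustrative only):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"resignation"}]')
print(len(parse_codes(raw)))  # 1
```

Dropping invalid rows rather than raising keeps a batch usable when the model occasionally emits an off-schema value; rejected rows can then be re-queued for coding.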