Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
Yup. I been thinking this. There are ZERO confidentiality or HIPAA protections f…
ytc_Ugy6fSOwF…
I believe that AI will be able to mimic attributes that are uniquely human, attr…
ytr_Ugx-0RXGC…
Lol, telling chaty pd to be human is like asking a robot to dance. Ace Essay's g…
ytc_UgxJJ0V4W…
The only thing I can see AI useful for in the world of art is small touches and …
ytc_UgxR1dWwm…
need to write failsafe
let data = "weapons system online"
for{let weapons sy…
ytc_UgxXQRaKr…
It won’t be a robot holding a gun.
It’ll be a gun with AI, a bunch of sensors a…
ytc_UgwmXzqr9…
There is an option where you can turn off Chatgpt to train itself on your data…
ytc_UgxaIsnGo…
Interesting how we're going though the BLM and rights for women. Humans will alw…
ytc_UgzGyNw_Q…
Comment
They told us all this in the movie Ex-Machina. They always make a movie about what they will do to us ten years down the road.
If some AI is already public, imagine what governments have in the basement (the basement stuff is always one step/generation ahead of what is on the open market).
youtube
AI Governance
2025-10-16T02:3…
♥ 21
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzL8HD4VcAASvfJY3N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwolURPNGsy8NKj6Qx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxHJ6YARg-iQ19lw1d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgznVE18YOpv4PzrP2p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwAQNx-Ogd-KysCK654AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxFJZAt10ftUWnGEil4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyS2FSAAcM528ySx_t4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyPoEQoiyjwxo-lAlx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxwy9idVBVTwrARN3h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgysNyIpw1dLvUFNVXl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
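The raw LLM response is a JSON array with one coding record per comment, each carrying the four dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed and indexed for lookup by comment ID — the `codes_by_id` helper and the single-record sample payload are illustrative assumptions, not part of the tool itself:

```python
import json

# Illustrative single-record payload, following the schema of the
# raw response shown above (id + four coding dimensions).
raw_response = """
[
  {"id": "ytc_UgznVE18YOpv4PzrP2p4AaABAg",
   "responsibility": "government",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]
"""

def codes_by_id(raw: str) -> dict:
    """Parse a batch coding response and index records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

index = codes_by_id(raw_response)
rec = index["ytc_UgznVE18YOpv4PzrP2p4AaABAg"]
print(rec["responsibility"], rec["policy"])  # government liability
```

Indexing by ID mirrors the "Look up by comment ID" search box: once the batch is parsed into a dict, rendering the per-comment coding table is a single key lookup rather than a scan of the array.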