Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Oh that's boring, I imagined they'd have a model hyperfocused on logic and fed w…" (rdc_nt8pwdw)
- "Robots don't deserve rights there a scrap of metal we are there gods The idea …" (ytc_UgxDqx5ZE…)
- "I can tell you these have to be geniuses because otherwise these kids are bored …" (ytc_UgzgWC6VS…)
- "This is utter bullshit- the WORLD knew that this was coming with the use of deep…" (ytc_UgxNkM0nU…)
- "Can someone tell me if they recognize the style the original was trained on? It …" (ytc_UgyNZLnOk…)
- "What can be done so that this ai is built safely and ensure that we use it as a …" (ytc_UgzFREUuQ…)
- "the scariest thing is that we keep seeing that AI is lying to us and cannot be t…" (ytc_UgwhEsEJE…)
- "I'm always going to say this. A.I. was supposed to help us with tasks like clean…" (ytc_Ugyk8_b75…)
Comment

> All those Boston Dynamics robots need is to be interactive and a virtual brain, facial recognition along with a salt atomic reactor or iridium and you have a completely autonomous warrior. Then we can be hunted 24/7 with night vision. Its possible now that the robots are as nimble as an athlete.

youtube · AI Governance · 2023-04-20T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxDs_G6yMuMR3rbUBp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzSJE0OobKT6yTGKcB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwfiqdkDG_KRiluIth4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8cUoXObSig4XdPu14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyV9t7DmDGWWEHnhAl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgykD5GjhgaE6QT38i14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"unclear"},
  {"id":"ytc_Ugwb5caVnaJyQP5nCj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzHxEgJ913rqAYzpLh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxUrs36AIgKToIgcZZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgxFtKeubJNiA859-Bl4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
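A minimal sketch of how a raw batch response like the one above could be parsed and indexed by comment ID. The allowed values below are only those observed in this batch; the actual codebook may define additional categories, and the `parse_batch` helper is hypothetical, not part of the tool shown here.

```python
import json

# Dimension values observed in this batch (assumption: the full
# codebook may include categories not seen here).
OBSERVED = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the observed set.
    """
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in OBSERVED}
    return coded

# One record from the batch above, used as a smoke test.
raw = '''[
  {"id": "ytc_UgxDs_G6yMuMR3rbUBp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''
coded = parse_batch(raw)
# Look up by comment ID, as the page above does.
print(coded["ytc_UgxDs_G6yMuMR3rbUBp4AaABAg"]["emotion"])  # fear
```

Validating against a fixed value set catches the common failure mode of batch coding, where the model emits a category outside the codebook, before it silently enters the dataset.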