Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.
- "They want autonomous weapons because home-grown soldiers wont fire on civilians …" (ytc_UgyXGyI80…)
- "Autonomous cars don't wait for school buses as it's an edge case, which we agree…" (ytc_Ugx6OJ3FC…)
- "But oh no, it will get better and learn... I've been training AI for years to wo…" (ytc_Ugw3b6IBD…)
- "A valid and rational take. That said, it IS possible for regulation to still wor…" (ytr_UgyliTpSa…)
- "As i have always said Humans will never stop inventing somnething that is essent…" (ytc_UgwptxDoW…)
- "I told y'all AI would be used for racism. That's now it's most important use cas…" (ytc_Ugze-2uUx…)
- "How is society going to function when AI reaches the point that it can replace t…" (ytc_UgxwyLwBk…)
- "If AI develops sentience, it won't have to worry about emotion and will probsbly…" (ytc_UgxSVI2Ht…)
Comment
> Geoffrey Hinton seems to be a very smart man and I believe he is.
> But even he can be wrong if he believe to controlling even smarter creatures that already have a difference of 1K to 1M maybe 1B times between humans brain.
> the most mistake was made to set Ai online on the net where he gets the possibility to hide in any corner

youtube · AI Governance · 2023-06-30T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
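
For working with these records programmatically, the sketch below shows one way a per-comment coding result could be represented and sanity-checked. It is a minimal illustration, not the pipeline's actual code: the field names come from the JSON under Raw LLM Response, while the allowed category values are only those visible on this page and are an assumption, not the full codebook.

```python
# Illustrative only: field names follow the JSON records shown below; the
# allowed values are just the ones visible on this page (an assumption).
from dataclasses import dataclass

ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

@dataclass
class CodingResult:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def invalid_dimensions(self) -> list[str]:
        """Names of dimensions whose value falls outside the expected vocabulary."""
        return [dim for dim, allowed in ALLOWED.items()
                if getattr(self, dim) not in allowed]
```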
Raw LLM Response
```json
[
  {"id":"ytc_UgxNvrUs_UCBOro8Vdd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzr8Z4ODRpjlmXaxTJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyHq9tSTIhaLuliRFN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzCJ8fXQ8-Dz2NfNop4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxh3ufCAkjW8yMjcKN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwZVcZSXmjlF1YBURp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxtsFGIKehhPrkyynt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmqGwB8ss5DIYNOeV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwtM6we43qnX-3-MCV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwkz78vgeUuu_OCcXB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
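
The "look up by comment ID" view simply pulls the matching record out of a saved batch response like the one above. Here is a minimal sketch, assuming the array has been written to a JSON file; the file name is hypothetical.

```python
# Minimal lookup sketch: load a saved batch response (file name is hypothetical)
# and return the record whose "id" matches the requested comment ID.
import json

def find_by_id(path: str, comment_id: str) -> dict | None:
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # list of per-comment dicts, as shown above
    return next((r for r in records if r["id"] == comment_id), None)

# Example: the record that matches the Coding Result table above.
record = find_by_id("raw_llm_response.json", "ytc_Ugxh3ufCAkjW8yMjcKN4AaABAg")
if record is not None:
    print(record["policy"], record["emotion"])  # liability fear
```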