Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

| Comment preview | ID |
|---|---|
| Great interview, Reuben. I've heard many of Deutsch's arguments from other peop… | ytc_UgzTySUww… |
| One comment I've seen recently that infuriates me: "Consumers don't care about t… | ytc_Ugwa2htiU… |
| But because they use tokens and dont actually understand words they could talk t… | ytc_UgzfjVjUN… |
| I'm glad you're feeling better! The only thing missing from this otherwise good … | ytc_UgzBfNnJA… |
| It's good art, but also no Ghibli. It's not that you aren't good or that AI is b… | ytc_UgwNFPxhp… |
| "The universe is code", the entire universe is made up of code, and the origin o… | ytc_UgwfKbwLb… |
| AI is just a producer like Rick - helping in generating music based on input fro… | ytc_Ugx3ERC00… |
| Do you know my 40 year old coworker told me whenever I explained our? He literal… | ytc_UgzA937Uz… |
Comment
Is me, but is this scenario like Skynet in the Terminator movie series? This is scary!!! Now we need this "AI Skynet" to cause the war or subjugate humans like the Matrix. Maybe AI developers should embed the 3 Issac Asimov rules of Robotics, "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." (Asimov). Thoughts?
youtube · AI Governance · 2023-07-10T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
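These four dimensions appear in every record of the raw response shown below. As a minimal sketch of how a single coded record could be sanity-checked, the following Python snippet validates a record against the category labels visible on this page; the allowed value sets are inferred from this sample only, not taken from the project's full codebook.

```python
# Minimal sketch: check one coded record against the dimension values
# visible in this sample. The value sets are inferred from the output
# shown on this page, not from the project's actual codebook.
ALLOWED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"approval", "fear", "indifference", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in a single coded record."""
    problems = []
    if not record.get("id"):
        problems.append("missing comment id")
    for dimension, allowed in ALLOWED_VALUES.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}: unexpected value {value!r}")
    return problems

# The record shown in the table above passes cleanly.
print(validate_record({
    "id": "ytc_UgxinDG0D28Y-0NYKPh4AaABAg",
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "regulate",
    "emotion": "fear",
}))  # -> []
```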
Raw LLM Response
[{"id":"ytc_Ugzbo_1_KZCcyIA84dl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNKrk0xqNixRJYOG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyBRnCs7XxUXTNjk2V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMYmL1RxQaieJqIC94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz5DfPRq5L4q3rbdrR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmOK8uzN2onxdsHlN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxinDG0D28Y-0NYKPh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzUkcM0ZCCOXNhdSvd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwO1Gpgc4H8CxqUont4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugznn4PUB1L6cCa5vMl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]