Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugx-uSAm2…: "He left out the most important part. He was shot because the police were surveil…"
- ytc_Ugw2CwBCM…: "Quickly. Give us all your water and electricity while living in poverty due to A…"
- ytc_UgwctJWFl…: "7:15 \"All of that is meaningless if there's no originality behind it\" AI art has…"
- ytc_Ugx0I7niE…: "Liberals should be banned from any a. I . Development. You ain't sly with the ro…"
- ytc_Ugz6rmdGk…: "You ironically managed to make the ai make better art, the original creativity i…"
- ytc_UgwK4ebkm…: "Ah yknow I just get the feeling that AI wasnt necessary in the slightest whatsoe…"
- ytc_UgyKzWkoi…: "Just give them hands and legs, and then capacity to think for themselves, then g…"
- ytc_UgxDfWvyQ…: "So, is he telling us there is no way (at present time anyway) to teach AI to rec…"
Comment
@Zac_Frost Sure, it may not be SkyNet, but it doesn't have to be that sophisticated to do real damage and you don't want to wait until you get to the point of SkyNet then realize, "Oh, maybe we should have had some regulations about this". Because the danger is we probably won't even know when we have crossed that line with AI. There currently exists the blackbox phenomenon where the AI starts doing things it wasn't originally programmed to do and the programmers don't understand why or how the AI was able to achieve that capability. 60 minutes also recently did a report on AI, and they talked about how someone started typing to the AI in a different language and then the AI started responding in that language even though it had never been taught it and they don't know how it learned the new language. So, the danger is not just in whatever we may be teaching or programming into AI, it's what AI might spontaneously learn and figure out on its own without us not understanding how or maybe even being aware until it's too late.
youtube · AI Governance · 2023-04-19T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgziWOaTSjVb4GzmXaJ4AaABAg.9oef0ZoHCh79ogt0DRWww-","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugxm1WtIYn8kbO6uN3F4AaABAg.9oedhmaszTb9oefRZ00IY5","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugxm1WtIYn8kbO6uN3F4AaABAg.9oedhmaszTb9of72tqa9oP","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxMxq-KaA9T1hW2RU54AaABAg.9oedhjwQq_59oent1Ej0CO","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgxMxq-KaA9T1hW2RU54AaABAg.9oedhjwQq_59oexcz1Re9M","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxC-fAOu6bBau1BbrJ4AaABAg.9oeWz5YtJd79ofIakgh14P","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgwPCTtYamLDBN3RnNN4AaABAg.9oeWgPBkBAo9oepAg99pwt","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzWIJY3RcdHVN22d9V4AaABAg.9oeWIZGvtLn9ofTXngCRGg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzpdL94KlDqyKMHAlp4AaABAg.9oeSE2lJoN19oepfW_d9OH","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgzpdL94KlDqyKMHAlp4AaABAg.9oeSE2lJoN19og9LeCnl1g","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
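The lookup-by-comment-ID step above can be sketched in a few lines: parse the raw batch response as a JSON array of per-comment coding objects and index it by `id`. This is a minimal sketch, not the tool's actual implementation; the comment IDs and the `index_codings` helper here are hypothetical, and only the four dimension keys shown in the table above are assumed.

```python
import json

# Hypothetical raw batch response in the same shape as the real one above
# (IDs here are made up for illustration).
raw_response = """
[
  {"id": "ytr_abc123", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_def456", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index the codings by comment ID."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        # Skip malformed rows: keep only entries with an ID and all dimensions.
        if "id" in row and all(dim in row for dim in DIMENSIONS):
            out[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return out

codings = index_codings(raw_response)
print(codings["ytr_abc123"]["policy"])  # -> regulate
```

Indexing by ID rather than scanning the array makes repeated inspections cheap, and the per-row validation guards against the occasional truncated or malformed object in a raw model response.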