Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a record by comment ID.
Random samples

- "How are you a Tesla owner and don't realize that all Tesla's have at least 2 for…" (`ytr_UgyDzA8Sv…`)
- "Waymo cooperates with police. A little context matters. People don't burn them j…" (`ytr_UgwN-tYiL…`)
- "There is a God as such, which will always be far more advanced than AI, AI will …" (`ytc_UgzfTqQwV…`)
- "AI will simply make everyone work less hour yet live a good life. No worries.…" (`ytc_UgzPMdF24…`)
- "Why does no one understand that this isn't some 'Skynet' scenario? These are lar…" (`ytc_UgyvzDSxA…`)
- "The more you think about this the odder it gets. All these companies that are ra…" (`ytc_Ugz6iRgQz…`)
- "This is the most worrying information I heard in the last few years, including a…" (`ytc_UgxmYO1ty…`)
- "so its a great code and many others coming since LLMs is just the beginning... A…" (`ytc_UgymfcPcx…`)
Comment
This is simply an advertising beat-up. Make sure not one of the irresponsible ‘creators’ of so-called AI benefits financially from funding to deal with the ‘existential threat’. This is fundamentally why non-consensual technological development cannot be allowed to proceed. This may have endangered humans, without their consent. It’s symptomatic of an unmitigated arrogance if it has, and the scientists who ‘created’ it must be held to account, if not for genocide then for contributory negligence and endangerment. Gaol will teach them why it is poor form to produce unauthorised weapons of mass destruction, no matter which side you’re on.
youtube · AI Governance · 2023-07-07T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
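Each record is coded on four categorical dimensions. The sketch below validates a record against the category values observed in this dashboard's output; the allowed-value sets are assumptions inferred from the samples, not an authoritative codebook.

```python
# Allowed-value sets inferred from the sample output on this page
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed",
                "indifference"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)  # missing dimension yields None -> flagged
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The coding result shown in the table above:
record = {"responsibility": "developer", "reasoning": "deontological",
          "policy": "regulate", "emotion": "outrage"}
print(validate_record(record))  # → []
```

A check like this is useful because LLM coders occasionally emit off-schema labels, which would otherwise silently pollute downstream counts.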
Raw LLM Response
```json
[
  {"id":"ytc_UgzLVgYV3FyTij9Mbtt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwtIu7FTb1_wYuOsyp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxoCt89eBNlLTPT25t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzC3dVGfw5UC3s23cl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_Ugyz9sJ9ELxjdhkGfrp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyENxhpxYn3QnYeTZV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzmTNfpmF7z8CXluLF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQjlaKGfYk3wMTx0p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwSWLuAbOCjzbHqEmV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzODbk_nIdN4ekBESx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
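The raw model output is a JSON array of coded records, one per comment. A minimal sketch of the "look up by comment ID" workflow, assuming only that structure: parse the array and index it by `id`. Here `raw_response` is a shortened stand-in for the full output above.

```python
import json

# Stand-in for the raw LLM response shown above (first two records only).
raw_response = '''
[
  {"id": "ytc_UgzLVgYV3FyTij9Mbtt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwtIu7FTb1_wYuOsyp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
'''

# Index the coded records by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

print(by_id["ytc_UgzLVgYV3FyTij9Mbtt4AaABAg"]["emotion"])  # → fear
```

Keying on the `id` field also makes it easy to join the coded dimensions back onto the original comment text and platform metadata.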