Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugxlu62dp…`: "What crazy things can humans do if anything that involves a brain can be done be…"
- `ytc_Ugz_Nx2Jh…`: "This video was posted on 21st May 2023, basically it's a 2.8yrs old video. In t…"
- `ytc_Ugy5mxh9b…`: "That's why in future , there will be more and more advanced A.I and there will b…"
- `ytc_UgyHrlZsw…`: "What is strange and wrong with this prediction is that society will not change t…"
- `ytc_Ugy5opAM4…`: "Also check if the service goes through a foreign country, just saying, they're i…"
- `ytc_Ugz9WIN3r…`: "I try to tell students not to use information from AI with out thinking critical…"
- `ytc_UgxJqL8By…`: "What kind of joke is this? A \"warning\" usually comes before the fact. This is an…"
- `ytc_UgxaqLy0O…`: "Not here for the ai part, but the talent argument is valid, if you take 1000 peo…"
Comment
The real dangers lie in the same place they always have. Crucial infrastructure systems & such must remain isolated & that's all there is to it. The reasons we already don't comply to that basic notion is just due to our gross incompetence, laziness & greed, regarding the dangers from ordinary hacking. We'll do anything to save a buck & call such unnecessary automations progress. Just hire a few more damn humans at the power station FFS. & further down the line, just don't load the damn thing into any machines/robots(NOR even have that ability) that can physically harm anyone in the 1st place. I think we all watched one too many black mirror episodes TBH. It's our flaws that are going to bite us, though it's true this can & probably will accelerate that. It already has in some ways but at least they don't seem to be directly life threatening at the moment. It seems to always take multiple lessons that involve suffering before we act appropriately to a situation.
youtube · AI Governance · 2025-08-26T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzf6B5zuA-NQ3Os94R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxjhqgUW1WmSy_L6ed4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwy6bK-E-_sjS8NyKN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx2ZzyG4tlS5Lf6qj54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgypVnkmiTYdJCwYBCl4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"resignation"}
]
```
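A raw response like the one above can be parsed and indexed by comment ID before the per-comment coding result is displayed. The sketch below is a minimal, hypothetical version of that step: the `CODEBOOK` value sets are inferred only from the codes visible on this page and may not match the real codebook, and `parse_coded_batch` is an illustrative helper, not this tool's actual implementation.

```python
import json

# A hypothetical raw LLM response, shaped like the batch shown above.
raw_response = """[
 {"id":"ytc_Ugzf6B5zuA-NQ3Os94R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgxjhqgUW1WmSy_L6ed4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]"""

# Allowed values per coding dimension (assumed; inferred from the examples
# on this page, so the real codebook may contain more categories).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "government"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation"},
}

def parse_coded_batch(text):
    """Parse one raw LLM response and index its records by comment ID,
    dropping any record whose values fall outside the codebook."""
    records = {}
    for rec in json.loads(text):
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            records[rec["id"]] = rec
    return records

coded = parse_coded_batch(raw_response)
print(coded["ytc_UgxjhqgUW1WmSy_L6ed4AaABAg"]["policy"])  # liability
```

With records keyed by ID, a "look up by comment ID" query reduces to a single dictionary access, and malformed or off-codebook records are excluded up front rather than surfacing in the coding-result table.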