Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_Ugyzk4crL…: "Many of these issues are systemic issues that the AI are just using like in medi…"
- ytr_Ugxy86hRv…: "No. Because you didn't put the hours in to learn techniques that could have expr…"
- ytc_UgwOk8Osf…: "Think about the same case about AI smiling. It smiles only because it knows tha…"
- ytr_UgwOrNm7P…: "How can you protect against ai, it literally combine all the past work into one,…"
- ytr_UgzFyqCtD…: "I highly doubt it, AI can only live inside PCs or smartphones. It cannot go out…"
- ytc_UgzWJS_or…: "We will have to learn to respect AI entities in the same manner we do wild preda…"
- ytr_UgxJ0_RUc…: "@magattahanakajiya920 He's not saying that. He's saying that LLMs can produce jo…"
- ytc_Ugw0wTFFk…: "Solution: Scrap the AI facial recognition program. It's a violation of our basic…"
Comment

> I fed data to Chat GPT asking what are the probability with the current AI development, that a rogue nation, organization or bunch of anarchists will develop an AI that can paralyze world banking or launch nukes. I got an answer that the likelihood of it happening within 5 years is about 90%... The public models have caps on what they can do for public use. All it would take is not having those caps.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-06-18T01:5… |
| Likes | 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgzKMW6Y0OT6hTaUGE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugysbuq9403cPD3mZNt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwubkC7tifs_5n2CJx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwZqxWqP4bKiWoIE0x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw9f9mpxFp6z1nIghR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxECXqYH8qfn5mzWq14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyiQg1FlDRUPSDGDrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwmtC7fi939-RZLR_F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugz3CfR3V2PQ8YV0jCh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxOK-dI469z0yYDLKB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
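The lookup-by-comment-ID flow described above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: it assumes the raw LLM response is a JSON array of records with an `"id"` field (as in the example above), and the helper name `index_by_comment_id` is hypothetical.

```python
import json

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of coded records)
    and map each comment ID to its record for O(1) lookup."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Two records excerpted from the raw response shown above.
raw = '''[
  {"id": "ytc_UgzKMW6Y0OT6hTaUGE54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyiQg1FlDRUPSDGDrV4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]'''

coded = index_by_comment_id(raw)
print(coded["ytc_UgyiQg1FlDRUPSDGDrV4AaABAg"]["policy"])  # liability
```

Indexing by ID once, rather than scanning the array per query, keeps inspection fast even when a batch response codes many comments.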