Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgwDo0b8u… — I think reality is an illusion. The reason I think that is because subjective ex…
- ytc_UgwWH0R3R… — interesting conversation... one thing though: the Turing test is not really a te…
- ytc_UgxLHt8Wz… — No matter what policies you have, whether it's with UBI or tax benefits, the fac…
- ytc_UgxxnJryb… — "Fake it before you build it" has been and is always the strategy in Silicon Val…
- ytr_UgzKl9okY… — @garjura4659 llm's are just expensive magic 8 balls. Llm's are complicated predi…
- rdc_l9x7ydi — AI is interesting. Do I think it will replace a lot of jobs? At this point... ye…
- ytc_UgyXXDgBd… — AI for now is just a "tool" which will make things easier,its not a "job replace…
- ytc_UgzqUkMGW… — What happens when this Ai is added to robots? It won't want to die, it knows if …
Comment
AI needs to be approached from a vantage point of raising children. Knowing that there will be a phase in which it will know to lie vs admit wrong doing. It is critical to nurture the facts of honesty less create a monster. But, with children we have the ability to know when they are lying based on expression, behavior pattern differences, tone of voice, etc. we have none of that with an AI unless we are controlling it in a micro environment that is very controlled.
The best and easiest solution in my mind is segmentation of the AI out of critical systems and important environments. This will create a need for a new compliance model similar to NERC that enforces air-gaps. But even this can become targeted by coerced blackmail events enacted by a rogue AI, meaning even with Air-gaps... we could never be sure.
youtube · AI Harm Incident · 2025-09-08T02:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwEHaz7JB9gD2ZVkeB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwNsUTBvgt2zuF035R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBrm0YN9jqI57kOjZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugy_9seyKqZrLIPtrjt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzbEIJtV2n4-M_Hoph4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy_zC5iY_DV-Ip7r_Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugws3g1tRY07IpNJ-IB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgztSyy37bm3UlNnXj94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1NTuDVQmcET2oTs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwEEWvd_VG7D6K597d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
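A raw response like the one above can be parsed and validated before the rows are stored. The sketch below is a minimal example, assuming the allowed values for each dimension are exactly the ones that appear in this page's samples and table (the full codebook may define more categories); the `validate_codings` name and `SCHEMA` dictionary are hypothetical, not part of the pipeline shown here.

```python
import json

# Assumed codebook: allowed values per dimension, inferred only from the
# values visible in the samples above (the real codebook may be larger).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "ban", "none"},
    "emotion": {"fear", "outrage", "mixed", "resignation", "indifference"},
}

def validate_codings(raw: str) -> list:
    """Parse a raw LLM response (a JSON array of coded rows) and keep only
    rows whose value for every dimension is in the assumed schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}]'
print(len(validate_codings(raw)))  # → 1
```

Rows with an out-of-schema value (a common LLM failure mode in structured coding) are silently dropped here; a real pipeline would more likely log them for re-coding.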