Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Our Creator has already had one of us obedient created men write about the AI sy…" (ytc_UgxCEd2CA…)
- "@Petrvsco terrible analogy. Dogs can't talk, but mankind has always been capable…" (ytr_Ugyv-lzaO…)
- "The intersection of AI and us is electricity. AI needs us because electric power…" (ytc_Ugz8IEhcP…)
- "My prediction: After 100 years, there will be no new artists anymore. Nobody bot…" (ytr_UgzGsmUQw…)
- "How about just don't give AI the rope to hang us with? Limit memory, limit, reac…" (ytc_UgzAZGiLH…)
- "Imagine cheating in a game. You can do anything, endless resources.. thats the p…" (ytc_UgwbN5Xyx…)
- "Noname-mi1oo it might not be my problem but, to me, it feels soulless. Where's t…" (ytr_Ugwfi8aRC…)
- "You can find this entire segment on YouTube called "AI Whistleblower: We are bei…" (ytc_UgwXVncH_…)
Comment
My contention, humans and animals have inbuilt models in their genes things that need for survival! In short you don't need the offspring how to survive, may be in some cases like humans, but not always! Reason being it is in their genes,. AI is trained by humans, like teaching a child, so survival is soft coded in the child, but their is hard coded in their genes another code! This explains Nietzsche's Underground Man! Still we are not capable of coding gene secrets to AI!!! Until then these flaws can be accounted as programming bugs! We find these bugs even in programs used for a long time, when it encounters hitherto unforeseen situations. STILL NO NEED TO FEAR!
Source: youtube · AI Governance · 2025-06-03T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgztfVmYQPgjrsjyMXR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxcfxCtQbn22RLStD54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzCSwt_pzfRsLdYfjp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyB4nA9a39KBVtTsGJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwai2tcNKMGP550CO54AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxVy2p-adjQmZ1sY4d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxiEidUTsTs3ToYq5V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx4DQ2auGbqIMF4mTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyhLOhidtjxalMc2054AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyqoGaHSCNWuqzHvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
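The "look up by comment ID" flow above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment codings, as shown) and pull out the record for one ID. This is a minimal illustration, not the tool's actual implementation; the two records reuse IDs from the response above.

```python
import json

# Raw LLM response in the format shown above: a JSON array of
# per-comment codings (only two records kept for illustration).
raw_response = """
[
  {"id": "ytc_UgztfVmYQPgjrsjyMXR4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxcfxCtQbn22RLStD54AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one
    comment ID, or None if the ID is not in the response."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgztfVmYQPgjrsjyMXR4AaABAg")
print(coding["policy"])  # -> regulate
```

A real inspector would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the LLM response is not valid JSON), which is presumably why the dashboard exposes the raw text at all.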