Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "So we need to fix our systems and ourselves before we work on AI is whatcha mean…" (ytc_Ugz7vWRyQ…)
- "If Ai really is smarter than us and wants any kind of longevity for its life, th…" (ytc_Ugz3yMuc2…)
- "Yah I see no problem with ai porn generically. Just it absolutely shouldn’t be o…" (rdc_lgmmafa)
- "AI learning should not be based on random internet data, but real statistics and…" (ytr_UgzloawuG…)
- "this robot must be talking about what is going on right now in the government so…" (ytc_UgxSKeyGM…)
- "Unfortunately he keeps referring to AI as automation and that disqualifies his p…" (ytc_Ugwt-0Zzj…)
- "AI gives me starting snippets that I still need to validate or reshaped accordin…" (ytc_Ugz9iM7KO…)
- "Really? I was expecting the robot to say it out of no where. Not by answer a que…" (ytc_UgxQIkK3I…)
Comment
Practically it's not possible for an artificial intelligence to trigger an end of the world artificial extinction event not with our current leves of technology we have machines and robots which are good at certain specific tasks ...thats the main barrier and these chat bots are dumb fu*ks they are capable of nothing the main pain in the ass is the military ai. If they gone rogue they can start a nuclear winter but those things are trained differently and probability of open ai infiltrating military network is something i am not sure of. These regular ai s can go rouge and slow our progress i think but in case of performing action what can they do because a helicopter can only fly and a home pc doesn't suddenly grows arms and legs first the end terminal machines need more advancement to a level at least 20% of Transformers so doomsday can happen but skynet is still 200 years in the future...more realistic scenario environmental disasters will significantly reduce our population by then.
youtube · AI Governance · 2023-12-12T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
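
Each coding result pairs a comment ID with four dimensions. As a reference, here is a minimal sketch of that record as a Python dataclass; the class name is illustrative, and the label sets noted in the comments are only the values visible in this sample, not the coder's full codebook.

```python
from dataclasses import dataclass

@dataclass
class CodedComment:
    """One coding result, mirroring the dimensions in the table above.

    The example labels below are only the values seen in this sample;
    the full codebook may define others.
    """
    id: str              # comment ID, e.g. "ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg"
    responsibility: str  # e.g. "distributed", "company", "developer", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "none", "ban", "liability", "unclear"
    emotion: str         # e.g. "indifference", "fear", "approval", "outrage"
```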
Raw LLM Response
[
{"id":"ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWFz_aWDnQA0GT4kR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxq-LGFIeXXSnk9Xn54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZhLbhamsHRfeuDGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtsPfw4nK4IlK_Twx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugym3ljHtGQv-E52PZp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6ZEEU5hjecBn3xrt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwCKNI_2-6kzC3nxnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZ21-qpMzgkzSFXhB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw5rrhm07zuygRZc9Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
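
To support the "Look up by comment ID" view above, a raw response like this can be parsed and indexed by `id`. This is a minimal sketch, assuming the model output is a well-formed JSON array as shown; the function name `index_by_comment_id` is illustrative and not part of any existing tool.

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM coding response (a JSON array of coded comments)
    and return a mapping from comment ID to its coded dimensions."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# raw_response would hold the model output shown above, verbatim;
# one record is repeated here to keep the example self-contained.
raw_response = """[
  {"id": "ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

coded = index_by_comment_id(raw_response)["ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # distributed indifference
```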