Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "complete reset is necessary. and cut internet or wipe out AI. go back to paper…" (ytc_UgyW0lwXm…)
- "Why do yall call it ai art, or call them ai artists. It should be AI images, ma…" (ytc_UgxXgcQwQ…)
- "I had that "platooning" idea many years ago, but it was because of my crazy comm…" (ytc_UgxYTdUFe…)
- "I think it's because they go through artistic style periods! Just like human art…" (rdc_oi40bfc)
- "@Jake-cy7to sometimes ai makes mistakes and inappropriate like the car which hi…" (ytr_Ugz6zU9yO…)
- "not to mention that facial recognition software is notorious for poor performanc…" (rdc_ghbvbx7)
- "Giving u/maximumeffort433 a run for the money! I love having you both around! Ke…" (rdc_dcx0s3t)
- "He felt a void in his life, missing connections with his parents and society. Se…" (ytc_Ugx5Y4rly…)
Comment
One of the things I have found that can help people comprehend some of the potential less intuitive risks of AI is an example of instrumental convergence proposed by computer scientist Stuart Russel. Instrumental convergence is a hypothetical tendency for all AI to pursue similar sub goals that can be quite different from their ultimate goal, or rather, the goal given to it by us. Stuart Russel argues that an AI 'can't fetch coffee if it's dead', and as such, even a task as mundane as this could result in the AI overvaluing it's own survival.
youtube
AI Governance
2023-04-18T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzyCAGAEUyMoPg1YUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz90xk3jw_FeSEGoYx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwAgfY-AWMJzrZ3RW94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyh8jUcUxQdvA7JoCN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzRU9wbEjA2os2s02t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwjHxfNS8zEvAwYuDV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyMu9BAFLyDUm_Zutd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzRtC--xntBBEwAUDV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0EIttSl3g3zlsxud4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw8ccmyjDoMcAOMQxp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
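A raw batch response like the one above can be turned into per-comment coding results with a small amount of validation. The sketch below is a minimal, hypothetical parser: the allowed label sets are inferred only from the values visible in this sample (the real codebook may define more), and the function name `parse_response` is an illustration, not part of the tool shown here.

```python
import json

# Allowed labels per dimension, inferred from the sample response above.
# Assumption: the actual codebook may contain additional values.
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "ai_itself",
                       "company", "government"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"unclear", "regulate", "none"},
    "emotion": {"fear", "indifference", "outrage", "resignation"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coded rows)
    into {comment_id: codes}, skipping rows with a missing id or
    an out-of-vocabulary label in any dimension."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

A lookup by comment ID (as in the panel above) is then a plain dictionary access, e.g. `parse_response(raw)["ytc_Ugz90xk3jw_FeSEGoYx4AaABAg"]` returning that comment's four dimension labels.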