Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI being used to generate a quick base to then build off of is fine in mu opinio…" (ytc_UgwcjKhGi…)
- "Just imagine some sort of Robot Revolution if we treat them like crap, like the …" (ytc_UgjY1qXS_…)
- "At no point did chatGPT become confused. This an example of how not to use click…" (ytc_UgyWYjQG-…)
- "AI music I guess would be 1000's songs (and the artists that created them) point…" (ytc_Ugybe17By…)
- "Bro I was is the job and I see my friend get attacked by that robot and I tell m…" (ytc_Ugybq74oi…)
- "And will likely do that. It sill start first with generated lesson plans and ot…" (ytr_UgwI3NM3t…)
- "So I have something that will someday become a disability, but today it GREATLY …" (ytc_Ugy_WkFhA…)
- "Im laughing over the fact that AI do be racist. Its like binary prefering 1s ove…" (ytc_UgzBBaovc…)
Comment
As long as inhibitors on the ai embodiment prevent the understanding or learning of violent actions/behaviors we should be alright. But then the argument could be made that that isn't a true AI unit, so in the end we are inevitably left having to take that chance that the AI unit may end up determining that humans are to be exterminated.
Source: youtube · Posted: 2017-11-28T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxQw7YfMcOhg6zyCbR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyZKKVQOOweXnuzyGR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgxMvnr5ixkjJGjQTgp4AaABAg","responsibility":"elites","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1DPqIDMxmnKwbdc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzQi1gAtvrINJhUugx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzA4QfdzSS2WxK1u6l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz3r5ONYiHca-oWIdd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghmHBsOLD4fY3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
{"id":"ytc_UghA24C7Vxvn43gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjnnBXlqmRuLHgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
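The raw response is a JSON array of per-comment codes, one object per comment ID, covering the four dimensions shown in the Coding Result table. A minimal sketch (in Python, with illustrative helper names; only the field names come from the JSON above) for parsing the model output and indexing codes by comment ID:

```python
import json

# Dimensions match the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

# A one-record excerpt of the raw LLM response shown on this page.
RAW_RESPONSE = """[
  {"id": "ytc_UgzA4QfdzSS2WxK1u6l4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "fear"}
]"""

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID."""
    coded = {}
    for rec in json.loads(raw):
        # Reject records missing any coding dimension, so gaps surface early.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        coded[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return coded

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_UgzA4QfdzSS2WxK1u6l4AaABAg"]["policy"])  # regulate
```

Indexing by ID is what lets the page resolve a displayed comment back to its exact code line in the batch response.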