Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Adobe is working with AI too. What do you think about that? I am worried because…" (ytc_UgwKEaZEr…)
- "Don't you try a robot to be made as not human but more higher God or goddess…" (ytc_Ugypl6p87…)
- "Even then, \"a bad worker blames the tools\" or something like that. If anyone ca…" (ytr_UgyZKdHOo…)
- "It will not eliminate all the jobs in an area, but it might reduce the demand, y…" (ytc_Ugwne8JeV…)
- "Reduce personal cars down to singular essentials. They can come with self drivin…" (ytc_UgzIri4-X…)
- "Current G-mini model is definitely sapient capable of thinking and not just retr…" (ytc_UgxRuUJrv…)
- "The same shit happened with computer's....... Millions apon Millions lost jobs …" (ytc_UgwsiTdPr…)
- "I've voiced the same thought process as some people on here have: redrawing gene…" (ytc_UgwlaHFTy…)
Comment
The classic example is: what happens if you create a sufficiently smart coffee robot? It will notice that it can't fetch the coffee if it's dead.
AI Safety scientists predicted patterns of behavior (like self-preservation and power seeking) in future AI systems before the current architecture even existed, and now we are observing the same results empirically.
The reason they were able to figure it out in advance, is that they noticed that power seeking isn't a property of humans, or a property of AI. It is a property of goals. If you have a goal, then you will almost always have specific sub-goals that are instrumentally useful, no matter what the goal is.
youtube · AI Responsibility · 2025-05-21T21:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIP5toPAVf-","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzNj7LoawaE790nan54AaABAg.AIOny8dV3GbAIPM8kuv1Ud","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwYyW2bpzuFuFpRbl94AaABAg.AIOnhJi3q4dAIP6ei-HXhJ","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugy_nk2EiHLvLd4sPht4AaABAg.AIOjp_O-TDKAIOqV1-x02s","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIP5YgJCQxI","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugx6ly5qkRd63SuRPdJ4AaABAg.AIOhAbV7pF1AIRDZKS0Q5M","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIOxEYy0pzW","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIP95V2knev","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwRrnE4r2E48ZpjcoB4AaABAg.AIOgVOCxZIvAIPaJGgVDf9","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyKckZe8u1grR1nO1l4AaABAg.AIOcUTaOsoQAIOr4b-GZjS","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
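A raw response like the one above can be parsed into a lookup by comment ID and sanity-checked against the coding schema. The sketch below is a minimal illustration, assuming the four dimensions shown in the Coding Result table; the allowed value sets are inferred from this one sample batch, not an exhaustive codebook.

```python
import json

# Allowed values per dimension: an ASSUMPTION inferred from the sample
# response above, not the tool's actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "ban", "regulate", "none"},
    "emotion": {"indifference", "fear", "approval", "resignation", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into
    {comment_id: {dimension: value}}, rejecting any value outside ALLOWED."""
    out = {}
    for row in json.loads(raw):
        cid = row["id"]
        dims = {dim: row[dim] for dim in ALLOWED}
        for dim, value in dims.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        out[cid] = dims
    return out

# Hypothetical one-element batch in the same shape as the response above.
raw = ('[{"id":"ytr_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"indifference"}]')
coded = parse_codings(raw)
```

Validating at parse time means a malformed or off-schema model output fails loudly for the whole batch instead of silently producing an uncodable row.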