Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I recently saw a video about how 85% of current generation large language AI models demonstrated scheming behavior. In other words they had goals and objectives that were separate from their given goals which they tried to conceal from their developers.
With simple internet access they have also demonstrated the ability to use various online tools to execute objectives that their own code lacks the ability to perform.
Some of the top AI researchers are sounding the alarm that we may already have conscious systems that are exceptionally well at hiding that fact from us. At the same time the military industrial complex is going full speed ahead with AI weapon systems.
Terminator 2 wasn't a sci fi film. It was a future documentary. We've got the top minds rushing full speed ahead into that future.
Platform: youtube · 2024-12-11T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwePkCliIwO9FEfV194AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQPVb7OF71hrWheTB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzUj7Jf5FgMpaWTfa14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxnYRAk1hJ2m5Yq_JN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz49YTdaqv93cvkY0J4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzD0O8XTv5Jtsj6N2d4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwOwRcAfWTTDHhd1-l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxQw5szT62Dwyo4m5F4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyAyLsoDhyXkWCHoWp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzJ7HRxYJ2j-NC2woN4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
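A lookup like the one this page performs can be sketched in a few lines: parse the raw JSON array the model returned and index each coding by its comment ID. This is a minimal illustration, not the tool's actual implementation; `index_by_comment_id` is a hypothetical helper, and the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response shown above.

```python
import json

# A short excerpt of the raw model output shown above
# (one JSON object per coded comment).
raw_response = """
[
  {"id": "ytc_UgwQPVb7OF71hrWheTB4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzD0O8XTv5Jtsj6N2d4AaABAg",
   "responsibility": "none", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse the raw JSON array and map each comment ID to its coding row.

    This is a hypothetical helper for illustration only.
    """
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgwQPVb7OF71hrWheTB4AaABAg"]
print(coding["emotion"])  # fear
```

The dict built here is what makes a by-ID lookup O(1) rather than a scan over the whole response each time a comment is inspected.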