Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "Even in science, media still have anti China propaganda 😅 the one who use AI is …" (ytc_Ugx1UbHH2…)
- "Previous technology revolutions were focused on tooling and systems. This is en…" (ytc_UgzdfssRw…)
- "The demerits from my points are: If I give examples from my academic experience,…" (ytc_Ugz42O6l6…)
- "She thinks AI bad? Have you seen recent movies. Disney remakes of last movies al…" (ytc_Ugxjn7c6b…)
- "@johndowland4623 and if you think camera make a creative decision i answer it de…" (ytr_Ugwv8W_LT…)
- "Okay, feel free to hate me, but let’s please at least have a conversation about …" (ytc_UgyXZyVVx…)
- "They don’t really have intuition. Many books aren’t even known. All have their o…" (ytc_Ugzlx__mz…)
- "@timbonator1 we build frames for capitol equipment used by chip manufacturers. …" (ytr_UgzugsPcK…)
Comment
This is a big problem, but AI Safety is much more critical. 38% of AI researchers believe there is a >10% chance that AI will cause HUMAN EXTINCTION, and they're building it anyways. source: a paper titled "Thousands of AI Authors on the Future of AI" (i'd give a link but youtube blocks those). If you want an explaination of the problem, Rob Miles' youtube chanel has a great 20min video titled "Intro to AI Safety, Remastered". We need AI capabilities research to pause because safety research is way, way, harder. Please help us Bernie.
youtube
AI Jobs
2025-10-08T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgzIPov4sptDsX-W4rZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzN0a8mziqYMkidcdd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxMUwMnkMBai98WGq54AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxp-E_S4B_8mymKZ3t4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzvxWTolgQVRQUS9Dd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy2OLLxwrNYb9aKvvx4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugzev_pl7UzfXjuFjfZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwl8raAiFyJAbgh35N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy2-0f96HsM6zF_AO54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwp3fhtQiQhedb5G294AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
```
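The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, plus a validity check against the four coding dimensions: the function names are hypothetical, and the allowed value sets are assumptions inferred only from the sample response shown here, not from the full codebook.

```python
import json
from typing import Optional

# Allowed values per dimension, as observed in the sample response above.
# NOTE: assumption — the full codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def lookup_coded_comment(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw LLM response (a JSON array of per-comment codings)
    and return the record matching comment_id, or None if absent."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return record
    return None

def is_valid_coding(record: dict) -> bool:
    """Check that every dimension is present with an allowed value."""
    return all(record.get(dim) in allowed for dim, allowed in DIMENSIONS.items())

# Example using the last record from the response above.
raw = ('[{"id":"ytc_Ugwp3fhtQiQhedb5G294AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = lookup_coded_comment(raw, "ytc_Ugwp3fhtQiQhedb5G294AaABAg")
print(coded["emotion"], is_valid_coding(coded))  # fear True
```

Returning `None` for a missing ID (rather than raising) mirrors how the inspector can be queried with an arbitrary comment ID that may not appear in a given response.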