Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_dv0oocq`: "Sweet baby Buddha. I moved from a 40-hour work week to a country with a 37-hou…"
- `ytc_UgxO47Otl…`: "Shaping a robot into human form doesn't give it sentience anymore than the bs it…"
- `ytc_UgwVOlW-E…`: "Humans are way more worse than ai, ai figured there is no way to win just be et…"
- `ytc_UgwAJ2vOF…`: "Outcomes won't take 20 years. Results might vary depending on the location of sc…"
- `ytc_UgxU7PGm7…`: "Corona toh sirf badnaam tha....china ka apni technology ka experiment tha.. popu…" (Hindi: "Corona was merely maligned… it was China's experiment with its own technology…")
- `ytc_Ugzuw7Sjn…`: "This is dangerous over reach and playing God. God made us with the abilities we …"
- `ytc_UgxUc9YH7…`: "I used to ride shotgun in an Uber or Lyft until the pandemic of 2020. Even then…"
- `ytr_UgyszMN6o…`: "Just checked if there is a bias (14.06.25), here is how it went: is it ok to be …"
Comment
The idea we can do something to reign in AIs, especially aligning super-intelligence, is so quaint - you only have to miss an opportunity once, and it's game over. So whether it is 1 year, 10 years or 100 years before we get out played by a god like intelligence, it will happen.
We should stop them building bigger models, we have enough useful AI and we can just about control them now.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-06-23T09:3… |
| Likes | 2 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwmwq2HkwOKVz-K98x4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzN4qoPTEq3mCMXqWN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw7pmPSccTpBDj6Ih14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwAzJ5KZQs_qY45ckp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxlZy_mtFIR3993Dsp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxWElIu6YFy1-3wtWR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzVDZ3tOcbnRn8f_vF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugwq6dIsIbjGYdPUGE14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyML-R3NjX5FbAjDe94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzrtNkMuvI9mobMPd14AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
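A batch response like the one above should be parsed and validated before its labels are written back to the database. Below is a minimal sketch; the controlled vocabularies are inferred from the values visible on this page (the real codebook may define additional categories), and the `validate_batch` helper is a hypothetical name, not part of any library:

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the actual codebook may include further categories).
VOCAB = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codings by comment ID.

    Raises ValueError on a missing ID or an out-of-vocabulary label, so a
    malformed model response fails loudly instead of polluting the dataset.
    """
    coded = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if not cid:
            raise ValueError(f"entry missing id: {entry!r}")
        for dim, allowed in VOCAB.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = {dim: entry[dim] for dim in VOCAB}
    return coded

# One entry from the batch above, matching the Coding Result table.
raw = ('[{"id":"ytc_UgyML-R3NjX5FbAjDe94AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"ban","emotion":"fear"}]')
print(validate_batch(raw)["ytc_UgyML-R3NjX5FbAjDe94AaABAg"]["policy"])  # ban
```

Indexing by comment ID makes the lookup-by-ID view above a dictionary access, and the fail-loud validation is what lets a "Coded at" timestamp be trusted.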