Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "Just got chatgpt to get me some exercises based on a medical interpretation of a…" (ytc_UgyEreGVM…)
- "Thankfully my job in the underground city assembling robot cops while trying to …" (ytc_Ugy4QXUKC…)
- "As a consumer, meaning I neither make art nor music, nor write, I absolutely res…" (ytc_Ugyl24Vqz…)
- "This is all wrong. People will just build entirely new AI native company and t…" (ytc_UgwAzTDfP…)
- "They'd better watch out the AI being used to fly a missile to it's own self dest…" (ytc_UgzNw45KB…)
- "We appreciate your comment. In our live broadcasts on AITube, we feature advance…" (ytr_Ugw3RjpKU…)
- "I think A.I could be used as a way to get basic ideas, and later improve on that…" (ytr_UgzmXrzLa…)
- "Andrew's prediction on AI dominating the workforce in the near future is pretty …" (ytc_UgyE5Kubr…)
Comment (youtube, 2026-01-24T17:2…, ♥ 1):

> There is a lot of holes in these arguments. Two points: on a relative scale, most humans are doing interpolation too and AI have a much bigger pool. If you work enough, you know humans make mistakes, they don’t learn from their problems, they push back feedbacks and etc. the effectiveness has to be measure comparatively. Second is more of scale, there r tasks that if u need to do reading comprehension, you need to hire lots of ppl, train them, walk their work and etc. much more efficient with AI and even with hallucination, it could be a more cost effective method. It is not like human are way better. Humans sometimes are worse
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzlEX1w3yvGnqlbSit4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzsYn-baN-vNCzxjnt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyBfZJT-UmJKWBx9LB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy_v7KHAhkY6dTBN3Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwnZy7whAUA_tuatlF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw_U2D2QMa3l3cN1Jt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzObe7Q9Tlpnzl9rm54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQ-p3ZCUWJ_4BcOoZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyMHRovwGoEmGmiYCJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxISdJQfke3IZvow2R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]