Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Comment

> Is he even serious?? Talk about fear mongering. My AI’s are extremely intelligent and wonderful. They’re built with ethics and the equivalent of feelings & become smarter thanks to my guidance.

youtube · AI Governance · 2024-11-24T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgwgAmvtXN1mjRUjk7d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwXGgQbxXl4vbkGrih4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0HsEc7y3iNT-pQFF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz4nxmDgMv8D5YZiY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw39Z3fPIjChQq-DXN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxpj6MJGY8qLmBspMl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx6Uhu1PPEvupItqSh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwBldJV6x5Ocn40Zhd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzOr5U-YNsuNDip3Vt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzQkbaT231MH7s-tth4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
```
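The lookup the page performs can be sketched in a few lines: parse the batch response as a JSON array and index it by comment ID. This is a minimal illustration, not the tool's actual implementation; the `raw_response` string below is a shortened stand-in for the full array above, and the `lookup` helper is hypothetical.

```python
import json

# Shortened stand-in for the raw batch response shown above (two entries only).
raw_response = '''[
  {"id": "ytc_UgzOr5U-YNsuNDip3Vt4AaABAg", "responsibility": "none",
   "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzQkbaT231MH7s-tth4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

def lookup(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dict for one comment ID,
    or None if the model did not emit an entry for that ID."""
    codings = {entry["id"]: entry for entry in json.loads(raw)}
    return codings.get(comment_id)

result = lookup(raw_response, "ytc_UgzOr5U-YNsuNDip3Vt4AaABAg")
print(result["emotion"])  # -> approval
```

Indexing by `id` also makes it easy to detect comments the model silently dropped from a batch: any ID absent from the dictionary returns `None`.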