Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Sorry I am late Tucker. I completely agree with Elon on this point. What is truly scary about AI is we are programming and training them to think like humans. And when it comes to how humans think, most of us generally use these sorts of AI platforms to express our darker aspects. I do not necessarily agree that AI are smarter than humans, but are certainly better at processing vast amounts of data spanning larger spans of time. This gives them a predictive advantage. Further, as problem-solvers, it is not their ability to troubleshoot and provide meaningful solutions, rather it is their decision/implementation ability that is dangerous. If, hypothetically, an AI comes to the conclusion that, to solve climate change we need to eliminate non-renewable fossil fuels and coal, it's ability to determine the fastest way to achieve that end then re-engineer our technology in order to accomplish this objective is 'anti-speciesist'. Factoring in that any coder and programmer can independently develop their own AI technology with very little investment and no oversight, this certainly represents a serious area of concern as we move forward. As Elon said, 'would we even know?'
| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2024-01-25T18:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbN0Zas7hnaOWChuN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyON372r3BSPjlx0R94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwErHKCzHsYoE2smvN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzOfuoKnFjj2fMz1e54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzEdNZn6WRC5M0fnod4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz0GyAWruZCm3lhHvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0WmWg99hbGRuXlNN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwz8CO1tr29pV1jlq54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz-eJOqESVTo6stdlx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxOUju_vBA0mlOwnsJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
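For reference, here is a minimal sketch of how a raw batch response like the one above could be parsed and validated before its values populate the coding table. This is an illustration, not the tool's actual pipeline: the `parse_batch` function and the `ALLOWED` value sets are assumptions inferred from the responses shown above (the real codebook may define more categories).

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# NOTE: these sets are an assumption; the real codebook may differ.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "industry_self", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coded dimensions},
    raising on malformed JSON or values outside the codebook."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        dims = {dim: row[dim] for dim in ALLOWED}
        for dim, value in dims.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = dims
    return coded

# Usage with a one-row response (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_batch(raw)["ytc_example"]["emotion"])  # fear
```

Rejecting out-of-codebook values at parse time keeps a single malformed model response from silently contaminating the coded dataset.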