Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
Random samples
- `ytr_Ugy9Sf_y0…`: @Lifelover992011 Makes about as much sense as thinking *real* guitarists know ho…
- `ytc_UgwP-7QLM…`: The dark side of humans has never allowed AI to do what good people expected.…
- `ytc_UgxWVgHbb…`: If anyone bothers reading this, I am about to publish a paper that will allow a …
- `ytr_UgzZ2jZxw…`: @ashardalondragnipurake I did explai how it is strictly different from factory …
- `ytc_UgwMIlkA5…`: The military applications are astounding, the accelerometers, gyroscopes, camera…
- `ytr_UgzCS170q…`: "AI is about to annihilate humanity. Why is that good for humanity?" i taught g…
- `ytr_UgwiMnVFI…`: @cacogenicist He didn't say that, but I will say it: AI is going to turn out to …
- `ytc_Ugw4C-cQ2…`: At this point people are brainstorming about a future where they won't even live…
Comment
> The main conflict between AI Safety and Capitalism is the level of risk. Capitalism demands moving fast and anything that isn't necessary is a barrier to launching a product. AI represents a lot of unknowns and the AI Safety field is still in it's infancy. The risk levels AI Safety represents are extreme with potentially extinction level in the worst case. Furthermore, once the genie is out of the bottle, it can't be put back in in many of AI safety's concerns. Thus, we only get one shot at this.
>
> Imagine that all of humanity is on board a rocket ship and all the AI companies are racing towards who can push the button first, due to the potential economic rewards of getting to be the captain of the ship. The AI Safety community is the group on board yelling "Wait! Wait! Is this thing safe?"
Source: youtube · Posted: 2024-06-18T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx5m1ixYD5cnMzZHmd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwR311Inj8OASVwxDh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz2ZYAH8RmTN0SRzm54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxz0g5VdGfsiV_j1T54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwkmhBicUM3k9wGxZ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy6xDRR8E_5-iDGhCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzI1pW-qLt2QkY-vnh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzmCb1yMgZ2oH34VNx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx8cb6_yHjoojyOhM14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw91Ue4DZWTegv8_gN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
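For reference, here is a minimal Python sketch of how a raw response like the one above can be parsed and indexed for lookup by comment ID. It assumes the response arrives as a JSON string; `index_by_comment_id` and `DIMENSIONS` are illustrative names, not part of any published pipeline API.

```python
import json

# The four coding dimensions present in every row of the model's output
# (matching the "Coding Result" table above). Illustrative constant name.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) and
    index the rows by comment ID, keeping only well-formed rows."""
    index = {}
    for row in json.loads(raw_response):
        # Skip malformed rows rather than failing the whole batch.
        if "id" in row and all(dim in row for dim in DIMENSIONS):
            index[row["id"]] = {dim: row[dim] for dim in DIMENSIONS}
    return index

# Abbreviated example using one row from the response above.
raw = '''[
  {"id": "ytc_Ugx8cb6_yHjoojyOhM14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''
print(index_by_comment_id(raw)["ytc_Ugx8cb6_yHjoojyOhM14AaABAg"])
# -> {'responsibility': 'company', 'reasoning': 'consequentialist',
#     'policy': 'regulate', 'emotion': 'fear'}
```

Indexing once and looking up by ID keeps the lookup O(1) per comment, which matters when a batch response contains many coded rows.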