Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "so i remember when in 80s people digging bunkers and telling everyone that it is…" (ytc_UgyvTjPJE…)
- "lets not forget how light works in camera. I am a dark skinned person and I can …" (ytc_Ugzf0X_V4…)
- "Boycott at&t. Why should we pay our money to ai. If it is against us, stop us…" (ytc_UgynawDar…)
- "Now hook your AI up to my logging truck running on icy roads just a tad wider th…" (ytc_UgycV56-Y…)
- "I have a deep, deep (deep) aversion to AI, but the penguin is wrong. The "artist…" (ytc_UgytHDF6H…)
- "One conspiracy level deeper: these post topics are suggested to people topics by…" (rdc_mw0kmum)
- "Sure, if we listened to the luddites of the past, we wouldn't have things we hav…" (ytc_UgyvJY6sR…)
- "the $100 trillion question: what happens when AI replaces all jobs? when all hum…" (ytc_UgyOFxS1n…)
Comment
The people crying to pause are just assclowns just making noises like chickens clucking. The reason I say that is that there is roughly the same probability that people actually pause AI development as we suddenly decide to pause all violence, which is effectively 0%. Also I fail to see how govs of the world can regulate AI without crossing some serious lines into people's personal liberty. How would they even enforce regulations on research and development? Are they going to be doing random door to door, computer to computer, etc searches? This problem is similar in a way to the problem of encryption. Not using it screws us in many ways and any regulation is likely to be suspect from a personal liberty pov.
I think the real risks from this tech isn't so much that a bad actor can, and almost for sure will, use it to hurt people(people have and will continue hitting people with hammers too) but rather that we, "we" being everyone including the top researchers in the field, have almost no clue what is going on inside these huge models with billions of parameters. This naturally increases the surface area for "blackswan" type events which are events, usually negative but not necessarily, that are hard to predict and have major long lasting impacts. I think we'll see positive and negative examples of these in the future
youtube · 2023-05-08T17:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugx8hkgo8y4kuhCHIZh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6CWFgwEaTUq1YUzx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz_EG9BZGNZUosFfO54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxAURW7T-9TfVAYUo54AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugznv6b7kwZ7tsFPsvx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzOFRYAdrjpLduFgMN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyliMwKaK5cN9QeBqt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzijW4tbNOrbwYbHtV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyDc0OwINZ_KB0_Rg54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxgzm8gb1wo5tHzBQV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}]
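The lookup this page offers (inspect the coded output for a given comment ID) reduces to parsing the batch JSON response and indexing it by `id`. Below is a minimal sketch in Python, assuming the response is a valid JSON array; the function names are illustrative, and the "unclear" fallback for missing ids mirrors the Coding Result table above but is an assumption about how absent entries are handled, not confirmed behavior:

```python
import json

def index_codings(raw: str) -> dict:
    """Parse a batch coding response (a JSON array) and index entries by comment id."""
    entries = json.loads(raw)
    return {e["id"]: e for e in entries}

# Assumed fallback: echoes the all-"unclear" Coding Result shown above
# for a comment whose id is absent from the parsed response.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def lookup(codings: dict, comment_id: str) -> dict:
    """Return the coding for comment_id, or the unclear fallback if missing."""
    return codings.get(comment_id, UNCLEAR)

# Two entries copied verbatim from the raw response above.
raw = '''[
 {"id":"ytc_Ugx8hkgo8y4kuhCHIZh4AaABAg","responsibility":"government",
  "reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugw6CWFgwEaTUq1YUzx4AaABAg","responsibility":"company",
  "reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

codings = index_codings(raw)
print(lookup(codings, "ytc_Ugx8hkgo8y4kuhCHIZh4AaABAg")["emotion"])  # -> outrage
print(lookup(codings, "ytc_unknown")["policy"])                      # -> unclear
```

Indexing once and looking up by id keeps inspection O(1) per comment, which matters when the batch response covers many coded comments.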