Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- ytc_UgzX9K7at…: "Can it crawl through an attack and tear out and replace a heating and air unit ?…"
- ytc_UgzwZqNJ8…: "Robots are not a threat because they are (and will be) expensive to produce and …"
- ytc_Ugw-Bns2F…: "Not so fun fact: Ai can now generate 3D models, 3D artists are not safe either…"
- ytr_UgwLcqt2R…: "The personas are basically nothing more than prompt engineering. You save the p…"
- ytr_UgwV0BSzo…: "Yes, that happens when I talk to some AIs as well. They seem to say things like …"
- ytc_Ugzf7NmOL…: "Chatgpt doesn't give same answer for same guestion every time because it is not …"
- ytr_UgzJQ4Na1…: "If no one knows how to code, who validates the LLM output. That is the question …"
- ytc_UgzmDPuDg…: "It would be very very very nice to have an economist and AI expert be in the pod…"
Comment
AI being more intelligent than humans is the thing that scares people the most because it will have the answers that none of us have. Imagine AI being able to save the planet, eradicate poverty, hunger, inequality and anything else you can think of. AI would be our leader, god or whatever highest position you hold in regard. It's those in those highest positions that have the most to lose because it's those people who would probably stand to lose their power, influence, and affluence.
If you like scifi, a great series that has AI as a main character is called Travellers.
Source: youtube · AI Governance · 2023-07-07T15:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzSRcWOs9IFzxDmfsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwK8fnlyLDRhfm6O5l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx_dAl_hlBnODAK14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyk1sknYCx4VndilEZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz51jiRL-aPMsybFrx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxVeMPwp7LATL7GLHd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzNoOG9S6b5utMx-rB4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYMtVeQi4mOEIhO-B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx2MboH5q2I5GHwIfR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTye8aC5eakamV3oN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
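A raw response like the one above can be checked mechanically before the codes are stored. The sketch below is illustrative, not the tool's actual implementation: the allowed value sets are inferred only from the categories visible in this sample (the real codebook may include more), and the function name `validate_batch` is a hypothetical helper.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "none", "developer", "government", "distributed"},
    "reasoning": {"consequentialist", "unclear", "virtue", "deontological", "contractualist"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "mixed", "fear", "outrage", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every row against the schema.

    Raises ValueError on a missing id or an out-of-vocabulary code,
    so malformed model output never reaches the database silently.
    """
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

# Example with a single (shortened, hypothetical) row:
raw = '[{"id":"ytc_X","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
print(len(validate_batch(raw)))  # 1
```

Validating against a closed vocabulary like this catches the most common LLM failure mode in structured coding: a plausible-looking but unsanctioned label slipping into the results.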