Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugw1rNHic…`: "AI cannot retain a consciousness without a person programming it with special so…"
- `ytc_UgzGNlgaD…`: "There is nothing we can do. Theoretically there is but knowing humanity enough i…"
- `ytr_Ugz2_pETF…`: "@AImysterykey not exactly, what I am saying is that we lack direct awareness of …"
- `ytc_Ugy7aONb5…`: "Why is everyone blaming the parents? Their son is dead! All you are doing is mak…"
- `ytc_UgzDpQTj4…`: "Honest question. How is the logic that they used to justify that any different t…"
- `ytc_UgxxKPGBK…`: "The only issues, do you think China is going to stifle their AI developers? Thi…"
- `ytc_Ugyvs6mrH…`: "The AI fighter thing reminds me of the storyline behind Ace Combat 7 Skies Unkno…"
- `ytc_UgyYehTkK…`: "Already used chagpt4 and copilot for production code now for about 7 months. Its…"
Comment
AI must move forward. For example, when you have a baby, you don’t know if that baby is going to grow up to be a murderer of you and or other people. No you don’t know but you have the baby anyway. You do your best while raising the baby at Key points in its development you try to steer it correctly. Just as with nuclear development in the beginning stages, it was not known if there will be a radioactive byproduct. But once it developed you know, and then begin to create a plan to deal with it. AI shouldn’t be feared. It should be monitored developed as well as deployed.
youtube · AI Governance · 2025-09-06T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxnYcp3LPq0j4tdj4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyCK2jJMzpyBD0V26x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzybWNM7qDfr73p92V4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugy6x0mdJwhuF0eMN5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQLcKdD6IVkuP8ydZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx3FLGRJtcdPHnJXil4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzAQbJXT-uOOsq0Crx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx05cbZEEb44P85lZR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxOrDxSX47YPdW-nb94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwHxEJVy1trzz6wyGl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```