Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Is this situation is kind of like your front door if you post something on the I…" (ytc_UgwuUXJc8…)
- "The graph at 5:27 IMHO tells the story of fiat currency (inflation) as the USA b…" (ytc_Ugzu-J2aq…)
- "Is it just me or does this interview look AI generated? (and not well at that)…" (ytc_UgwRZwziQ…)
- "Can't wait for Apostate Prophet respond on AI Muslim Robot try to defend Islam a…" (ytc_UgySRuHsv…)
- "The moral conundrum is this : what would happen to royalties , and credits if th…" (ytc_UgwuLv5jl…)
- "Full Disclosure, I have an extensive legal background and consider myself an exp…" (ytc_UgxNAayai…)
- "“Erm actually I don’t have the money to pay for an artist” so? Make art surely i…" (ytc_UgzA1621Z…)
- "When A.I. begins to dumb down on intellectual information and / or conversation…" (ytc_UgyJM5HAo…)
Comment
If AI causes mass unemployment, who will buy the products and services companies will sell?
I have a humble idea for responsible AI. Each AI application and product can have a tag indicating the level of usefulness to humanity.
There could be several levels, such as Level 1 or L1 (most positive) through L5 (least positive). A second digit could signify the impact area such as health (let's say L1.1), and the final digit could signify the economic or social risk level. So, for example, an L1.1.1 application could be an AI application that has profound positive impacts for humankind in the area of healthcare with little economic or social risk.
Using such a system, it would be possible to agree on the impact of AI applications, provided that an independent body of global researchers agrees on the criteria that determine which tag each AI application falls under.
Source: youtube · Video: AI Jobs · Posted: 2025-10-20T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxMpFxWHh8ibHyClJd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxK7g8e17Hmwgln7kd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy5PB-fotRZafXXH2p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwgOlyjkCPAO6I9nfR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy7B067vyjyhIHQLjd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxy6KQeUJZE4Qxm8954AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy2YhZNCQyuG4VJ0Q54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVNbN64OfArZm0eFR4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz5w88a7pC0pxLKIuV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxFoehHjoYK2aZLv4N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
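A minimal sketch of how a raw model response like the one above could be parsed and validated before the codes are stored. The allowed label sets here are inferred only from the values visible in these examples (a real codebook may define more), so treat them as an assumption:

```python
import json

# Assumed label sets, inferred from the sample output above — adjust to the actual codebook.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"outrage", "approval", "fear", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject any row with an unknown label."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}")
    return rows
```

Validating against a closed label set catches the common failure mode where the model invents a near-synonym (e.g. "anger" instead of "outrage"), so bad rows can be re-queued for recoding rather than silently polluting the dataset.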