Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "It's here , you know when the drones came out , I was like naaaa won't work . Dr…" (ytc_Ugws1Np6T…)
- "Wow! Amazing! AI made understandable. The questions really helped. Loved the hum…" (ytc_UgxczDZng…)
- "This is not a good example. There are pieces out there which are unique - they d…" (ytc_UgwjVp7A5…)
- "I'm saying it now this is not a good idea too much AI can go rogue & collectivel…" (ytc_UgyOvwSV2…)
- "Great episode really, tho i wish you did your debunk section and point out that …" (ytc_Ugzzyzglp…)
- "So you boycott Amazon and companies that do not hire a larger percentage of huma…" (ytc_UgwBwOnrr…)
- "You are DEAD wrong. In the past, let’s take when the car replaced the horse, the…" (ytc_UgzLWd5ZP…)
- "Humans robot in the future leader. Slowly kill our soul essence. Human will be r…" (ytc_UgydBUzCI…)
Comment
The solution (which I think would only be temporary, given man's stupidity, greed, and hunger) would be for one or more people holding an authority greater than the financial power of the world's giants and multinationals to create and enact a prohibition defining a limit on the exploitation of AI improvement, somewhat like the law of Human Rights. That would be the best decision mankind could make while it still can, before its machines take over (even if, I repeat, it would only be a temporary solution, nonetheless an alternative to the current situation).
One can quite reasonably consider that a potential evolution of AI to an abusive stage is a violation of human rights if, in the end, man can no longer govern himself.
youtube
Cross-Cultural
2023-04-07T03:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_Ugwz12qY5OQsRSsJBU94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyGIz5xuv9a1z_yGHJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0WC6dh6fFg48fi3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw8sjtjFgFagZ0IjJp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmPw2ZN_habNlCo-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxGZ-CeKcI42Jh4rAF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzNyU4-9zublPkF30x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw0YWwvDtjqWdlsrJ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy2BIK3R7Zp-CFphyx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxwlyPoazfuwTp-Xrp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
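The raw response above is a plain JSON array, so the "Look up by comment ID" inspection reduces to parsing the array into a dictionary keyed by `id`. A minimal sketch of that lookup, using shortened placeholder IDs and the same field names as the response above:

```python
import json

# Raw LLM response: a JSON array of coded comments. The shape mirrors the
# example above; the ids here are shortened placeholders, not real ones.
raw = """
[
  {"id": "ytc_A", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_B", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "resignation"}
]
"""

# Build an id -> coding lookup so any comment's coding can be inspected.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_A"]["policy"])   # regulate
print(codings["ytc_B"]["emotion"])  # resignation
```

Keying by `id` makes each lookup O(1) and mirrors how the "Coding Result" table for a single comment is populated from the array.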