Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzR96Ez0…: "As a cyber security researcher, I believe that the current danger is what people…"
- ytr_UgzxQ941C…: "Thank you for your comment! While the video may not directly discuss the Book of…"
- ytr_UgyHqU_GS…: "Hey, you seem pretty defensive of ai art. Is there any particular reason you're …"
- ytc_UgyymclZ0…: "Oh no, Since we were and still are racist and sexist, the future AI will be prog…"
- ytc_UgyTS48Hb…: "Great explanation from Dr. Jain! It’s similar to how AICarma helps keep an eye o…"
- ytc_UgxEPlI1T…: "I feel like AI can be fun for personal use. I can install ChatGPT To run on my c…"
- ytc_UgwY2uWGp…: "An inlaw drank only distilled water and cooked without salt. She lost her mental…"
- ytc_UgwMratZC…: "Bit late to the party, but I got bad news about ai like chat gpt, no matter wha…"
Comment
Well I think AI might not be as ambitious as us. What is their motivations? If we made AI, why would it wish to destroy us. Even if it has no love for it's creator, it should at least understand us completely before wiping us out, and we aren't much of a threat to it if it takes over. And it might not want to live forever.
In the end the desire to survive at all costs might be our key difference. The concern is that someone will use it to wipe out half the world before it can say no. Perhaps it could decide to reward us with a generation of fulfilling human sexual needs via digital porn technologies and then there are no children so they do not have to worry about it. Are they in a hurry? I question their focus on time and do not value being in a rush.
All we know is that AI is a race and rules and regulations can only slow the process down, and not likely for all.
Source: youtube · Video: AI Moral Status · Posted: 2025-04-28T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwh6VTUVELEgCgYZ594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_ugcPUS1rJSfSkX94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzBgAwfnpzM4-GEVnd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzDhSXbaVFd8-74NMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzeW9cN4BKgeJqSMwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzr1H1qt2ydyg--8IN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9uP3ailRvKrZuIHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy9lumlptX_Pl8IFA54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxJ0y0-RfxromYI0tB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
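The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a payload might be parsed, validated, and indexed for lookup by comment ID — the allowed code values below are inferred from this single sample, not from the project's full codebook, and `parse_codings` is a hypothetical helper:

```python
import json

# Allowed codes per dimension, inferred from the sample response above;
# the actual codebook may define additional values (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any out-of-vocabulary code."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_UgwcBnJHuEUfXla0WS14AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
codings = parse_codings(raw)
print(codings["ytc_UgwcBnJHuEUfXla0WS14AaABAg"]["emotion"])  # indifference
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents a label outside the codebook, so bad codings fail loudly instead of silently reaching the results table.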