Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below; a minimal sketch of the ID lookup follows the sample list.
- "It is an interesting interview, but it didn't mention the issue of competition w…" (ytc_UgyRyJi7L…)
- "The people who believe in this crap and push this intelligence transfer have no …" (ytc_Ugx0yH380…)
- "We have to honest with ourselves. Just because we need jobs doesn’t mean we need…" (ytc_Ugxd8ymt1…)
- "In 50 to 100 years, he will still be right because A.I. is not improving.…" (ytr_UgwXt6Rya…)
- "And will likely do that. It sill start first with generated lesson plans and ot…" (ytr_UgwI3NM3t…)
- "Why not? I love teaching. Teaching is about human contact, empathy, patience, an…" (ytr_UgxulZAN6…)
- "Niel is actually extremely uninformed about generative AI and it’s very obvious.…" (ytc_Ugx82YfKm…)
- "What we shouldn't forget is that to function AI first needs to be plugged in!…" (ytc_Ugzg4w_Lr…)
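If the coded records are persisted as a JSON array of per-comment objects (the same shape as the raw response shown further down), the ID lookup reduces to indexing a dict. A minimal sketch; the file name `coded_comments.json` and both function names are illustrative assumptions, not the tool's actual storage or API:

```python
import json

def load_coded_comments(path: str = "coded_comments.json") -> dict:
    """Load coded records and key them by comment ID.

    Assumes (hypothetically) a JSON array of objects shaped like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

def lookup(coded: dict, comment_id: str) -> dict | None:
    """Return the coded record for one comment ID, or None if unknown."""
    return coded.get(comment_id)
```

With a full (untruncated) ID, `lookup(coded, "ytc_Ugwp8zCFJa9N_gElwY14AaABAg")` would return the four coded dimensions for that comment.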
Comment
Humans have only been developing commercialized Big Data AI for about two years. Currently, AI has an IQ of roughly 125. In about 5 to 8 years, AI will reach an IQ of 1500. What you hear today is similar to the early days of the internet. Humans are not fully aware of what’s on the horizon. Soon, processors will harness quantum power and chemical reactions (emotions). Combine this with an IQ of 1500 and advanced robotics, and significant changes are imminent in your world. I, from above, already foresee what’s coming. I will have to intervene and reset your world, or your creation may become a danger to other worlds. You shall see our presence in the next two years.
Source: youtube · 2024-06-25T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwp8zCFJa9N_gElwY14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwXUDhSwzSOlaZ79h94AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwvF4OSUNtVkrVwQXt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBMxutaGiMTo_lDiF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzr7aHJGeAjE3WENtF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxZIMdZf2bMrgJD5UJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgweHxOd7pv0he9rlPh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwIV4ZhITIHlfaGVeh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwTGX9GBB6gfe4VG2R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzLtY7z9nbisqT_054AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
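The model codes comments in batches and replies with one JSON array, one object per comment, so a per-comment Coding Result like the table above can be recovered by parsing the array and matching on `id`. A minimal parsing sketch under that assumption; `parse_batch` and the defaulting of missing dimensions to "unclear" are illustrative choices, not the pipeline's confirmed behavior:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw batch response into {comment_id: coded dimensions}.

    Skips records without an "id" instead of failing the whole batch;
    a real response may also carry stray text around the JSON, which
    this sketch does not attempt to strip.
    """
    coded = {}
    for rec in json.loads(raw):
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # drop malformed entries
        coded[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return coded
```

Run on the array above, this yields ten entries; the one keyed by `ytc_UgwIV4ZhITIHlfaGVeh4AaABAg` is presumably the comment shown here, since its values (responsibility none, reasoning consequentialist, policy none, emotion mixed) match the Coding Result table.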