Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- I don't think AI is going to replace jobs and be used by big companies. The art … (ytc_Ugzfr4ccP…)
- If that was me and I saw that second robot side eye me it’s going down… (ytc_UgzCVnwKA…)
- How many people have artistic "intentions" but can never be artists simply becau… (ytc_UgwYsH0ZT…)
- This is just blatantly untrue, global ai usage uses about as much water as one g… (ytc_Ugym2IQHH…)
- They expect generative AI to be able to eventually remove the need for instituti… (rdc_n9w3v9v)
- I met Ben recently at a tech investment conference and he was kind enough to eng… (ytc_UgwvkYkrL…)
- A lot of academics will happily answer questions if you ask them, and i wager th… (ytc_Ugxea3K4z…)
- ik this is a bad thing but chatgpt is my bestfriend..cause i have no friends.… (ytc_UgzAqRJ5v…)
Comment
Big fan of Yuval, but there is some misrepresentation in the way he speaks about how AI would lie from watching humans lie. This is not how AI training/learning works... at all.
If an LLM is trained on curated data whose outputs are honest to their inputs, the AI will in no way learn to lie by itself. Especially not if a large enough set of cases is represented in the dataset, and especially not if the inputs in the data contemplate the case of observing users/humans lie and then maintaining an honest attitude. If RL is done strictly in the same way, this, again, has no chance of happening.
Indeed, he has proven himself not to be an expert in AI, as anyone who works with DL will instantly rebut that claim.
This would only be possible in an active-learning AI, and all labs have been very reluctant to implement active learning, especially due to its lack of performance. Frozen-state AI is the norm for now.
youtube
Viral AI Reaction
2025-06-21T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwMjsmGswYL6LI_mEx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyhgJpQWxSHpTALsBV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzbiFb7giNduegeV3J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwWH9S7Oa7JMdgznoB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwitQSOYF8xWUbW0HV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugw3KFmBdAWf_cTlNTt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx_J7i_Vh5miuP0b7J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugyo35uvgF9hF8dNSCJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzuBcMqn7Z9JEY_z2J4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwTuEXQjnMWWt2k-XZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}
]
```
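The raw response above is a JSON array in which each element carries a comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) shown in the result table. A minimal sketch of the "look up by comment ID" step might look like the following; the `lookup` helper and the two-entry excerpt are illustrative, not part of the actual pipeline.

```python
import json

# Excerpt of the raw model output above: a JSON array of coded comments.
raw_response = '''
[
  {"id":"ytc_Ugx_J7i_Vh5miuP0b7J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzuBcMqn7Z9JEY_z2J4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
'''

def lookup(comment_id, response_text):
    """Return the coded dimensions for one comment ID, or None if absent."""
    codes = json.loads(response_text)
    # Scan the array for a matching id; IDs are unique per response.
    return next((row for row in codes if row["id"] == comment_id), None)

result = lookup("ytc_Ugx_J7i_Vh5miuP0b7J4AaABAg", raw_response)
print(result["policy"])  # industry_self
```

Parsing the whole array once and indexing it by `id` (e.g. a dict comprehension) would be the obvious refinement if many lookups are made against the same response.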