Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `rdc_dwwazow` — Now think of it from the other perspective. Imagine convincing your fellow count…
- `ytc_UgzAqQGZd…` — Great interview for perfect insights in to the role of AI in radiology as in 202…
- `ytc_Ugyx1P7U2…` — The assumption behind this analysis seems to be that there is in fact a "mind" b…
- `ytc_UgzYAH9J8…` — I just need my gutters cleaned out & leaves raked...will AI replace the kids dow…
- `ytr_UgwLTl5kg…` — @RaaynML Ok but using ai while still having hands and feet is like using the cru…
- `ytc_Ugzrixjp-…` — Man people really misunderstand what chatgpt is. Its made by humans and learned …
- `ytc_UgwDDa53z…` — Everything these idiots say is BS and they think it’s funny all of these AI idea…
- `ytc_UgwOfGJT7…` — Interesting interview. We just discussed responsible use of AI, specifically at …
Comment
I wish I remembered the scientist's name, but I listened to this one scientist who explained the limits of current Ai. Right now ai is only as good as the amount of data we feed it and it learns from. What it cannot overcome at least in its current state is generalizability. it doesn't know how to generalize information. The only reason why it's getting supposedly smarter is because we are feeding it more data more and more data. That's like teaching it here's a scenario A and B and C and D and E... Right now we just basically teach it all kinds of different scenarios and so the more scenarios it knows the smarter it appears. but it doesn't know how to generalize like humans do
Source: youtube · Topic: AI Governance · Posted: 2024-04-16T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxVRK18iuVr4K-2Nr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyp4eX_wA4fQhYL6z14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwVf88lFT0llSRa-T54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwxP5NBWzcjFRB80pZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPBCfzN1QllEn0K9R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxIGntrzPkWFKCkXrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyaB0iBBinlwJ64IIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwPMsHRyrV-MsYpy2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxKFRBeRrs6Eyu1Yyl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNj4WoZ3XkAwtDXnh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
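The raw response is a plain JSON array of per-comment codings, so looking up a coding by comment ID is just a matter of parsing the array and indexing it. A minimal sketch in plain Python (no external libraries; the field names and IDs come from the sample response above, trimmed here to two entries for brevity):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, one object per
# comment, keyed by "id" (schema and IDs taken from the sample above).
raw_response = '''[
  {"id":"ytc_UgxVRK18iuVr4K-2Nr94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwPMsHRyrV-MsYpy2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgwPMsHRyrV-MsYpy2d4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

The same dict doubles as the "look up by comment ID" path above: a missing ID raises `KeyError`, which is usually the right behavior when the model skipped a comment.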