Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
[translated from French] nobody cares, only the creator has the last word, it's the one who created this …
ytc_Ugynw0IcW…
Uber Driver: A worse taxi driver
Air B&B Host: A worse B&B
Youtuber: A worse TV …
ytc_UgygjJ-g1…
Feels like public school kids are tougher and more resilient compared to to thes…
ytc_Ugx_vmIRa…
I just wanted to say I think you are a good person for diligently helping your p…
rdc_d7kwryp
I think self driving cars will 'help' by allowing for increased density. That in…
ytc_UgwpmBeDX…
I would be interested to see Dr.Tommy thoughts. With you both being at differen…
ytc_UgwJtC9Ea…
When I use Chatgpt, I was so easy to get mad because of the chatgpt response loo…
ytc_UgwZnaVxM…
Jo Kelly this is soft AI it won't have a personality. Yes bugs do occur but you …
ytr_Ugjiws5jv…
Comment
I really want AGI to happen, unfortunately it will not in all current approaches. Put simply, AGI will never create anything new, it can only do the limit of what humans have done at a very fast speed, nothing more. We train AI based on information created by humans with many flawed perceptions of reality and the arrogance to believe that modeling the human cortex (Which humans still do not understand enough for this task in the first place) will magically create new creative aspects beyond existing human cognitive capabilities, clearly shows our ignorance. This perspective is flawed because humans do not understand how to make any substrate become self-aware or understand how (If at all) humans are self-aware. All we are doing is rebranding self-assembling algorithms and weighted decoders with the buzzword "AGI" to pull investment capital into these efforts.

Anyone who works in AI will tell you all models hit a wall in their intended capabilities (AKA Chasing the nines). This means you may get a transform or other methodology to reach 99.9999% accuracy, but it will inevitably never get any better than the system's maximum. This means, after billions of dollars spent to get to 98.7% accuracy and then you hit the wall, you need to rebuild and re-invest billions more to try again and hit the architectural lottery and pray you get beyond 98.7% with the next method. When in reality, if a group of talented engineers start from scratch and build something that does not depend on building its self (because people are too lazy to innovate it themselves today) and they understand how it works instead of it being a roulette black box, the method will likely exceed the AGI method in fewer, less costly, tries.

The biggest problem with these AI scientists is they are human, so they see patterns of intelligence and outcomes of these models and ASSUME this is equal to the human experience of the same; this is an unacceptable blind-spot within this research.

Humans are not capable of validating AGI and it's capabilities objectively because they are not able to experience what it is to be an AGI (If that is even possible in the first place). Silicon chips and electronics do not feel anything - that requires biology, hormones and various other elements humans don't understand enough. This whole AGI thing is overblown and a waste of money. I really wish people were smart enough to create true AGI, but humans lack most of the information to do it.

When all of these companies fail to produce on their AGI promises, our society will be broke and need to start hiring humans to do the work again, but unfortunately, it will be too late by then as not enough people will be left to take over for our failed AGI systems. Over 60 years of this crap and I am thoroughly unimpressed by every aspect and result of this field - it's really pathetic. I look forward to the day when humans start doing smart things again with their time and attention.
youtube
AI Governance
2023-12-31T09:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyBc4mWxSvWCj5IbcJ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVcl3OmKPE5DgxxLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwu_1_WJzISL3kaUs54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyH7sk68u76-XnRwLt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjddhKlEEwYF_-E5N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyvDLo2g6FAW2nlg7h4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwiFLr2eg6cbnK49U54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugydeh-pmtrSSKcwPoN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyTlly0Io4lhvxjdNB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwX4HZ5lHV__FJWET54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
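The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and validated before lookup by comment ID — the allowed values below are inferred only from the samples shown on this page, and the function name is hypothetical, not part of any tool shown here:

```python
import json

# Allowed values per coding dimension, inferred from the records on this
# page (an assumption: the full codebook may include more categories).
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping any record with a missing or out-of-schema value."""
    records = {}
    for item in json.loads(raw):
        codes = {dim: item.get(dim) for dim in SCHEMA}
        if all(codes[dim] in SCHEMA[dim] for dim in SCHEMA):
            records[item["id"]] = codes
    return records
```

With the parsed map in hand, a lookup by comment ID (as the page offers) is a plain dictionary access, e.g. `coded["ytc_Ugydeh-pmtrSSKcwPoN4AaABAg"]["emotion"]`.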