fuzzfactor 4 days ago

I guess nobody around here has any idea so far, and it's been 5 hours. Longest AI winter I've endured in days of piledriving.

Well, since the Open Source Initiative didn't start until 1998, I would say it had something to do with Linux.

And what's an individual supposed to do with Linux in 1998 that would be so good?

How can users get the most out of it? Especially in the lowest-cost way? One key element to begin with was getting more (compute) performance out of obsolete hardware (which was already paid for) than could be gotten from new PCs that were still under warranty.

Mainly I would say "open source" is so anyone can make every bit of their very own personal computer perform at its maximum capability without needing any proprietary OS or software.

Requiring no licensing fees of any kind, ever.

Not just my opinion, either.

That baseline may never have been fully achieved, but it would be a good one never to forget.

And ideally, once the hardware is even slightly capable of running more challenging software (like AI) than previous generations could, the software naturally delivers the maximum performance that new hardware can achieve for the task at hand.

With no additional licensing fees of any kind.

Preferably with AI or anything else, an individual stand-alone PC could simply be loaded with a no-cost image that makes the most of that PC's resources without having to lean on any outside connections. Most people didn't have the internet in 1998.

Whatever level of intelligence can be reached on any particular stand-alone hardware would always be welcome if it came without cost. And if it turns out not to be intelligent enough for a particular task, then I guess you would have to use a cluster or something, which most people aren't going to have.

But if you wanted to get the most intelligence that any one PC is capable of on its own, in a completely auditable way, and the task was just too challenging for the state of hardware at the time (maybe even for a cluster), you would just have to wait. You might not even have needed to wait long if this approach had been in the pipeline the whole time. Incredible advances in hardware utilization are possible with most PC owners making no extra effort whatsoever. "Proper" AI software would simply be effective enough to do "something" useful, without deficiencies, on a 1998 PC, and be smart enough to "simply" and directly become more intelligent in proportion to more powerful hardware as it arrived.

10x the hardware capability should give 10x the apparent intelligence.

I think this is now well proven by those who have more than enough financial resources but don't want to put effort into things that would actually make each PC perform with more effective intelligence on its own. Not when they can afford orders of magnitude more (and more powerful) hardware than an average individual, to get truly improved performance from the vestigial software progress that has already been made.

All indications have pointed to far-out-of-reach central servers containing most of the repositories that underlie the brains behind the "intelligence", with the power of neural-processing PCs expected to go mainly toward interacting with a mothership that knows best, rather than thinking on their own.

Anybody who studied neural networks in the early '90s should have been able to predict remarkable results as soon as adequate hardware became available. It was a no-brainer. But a potentially more intelligent approach was obviously needed, or the incredible reliability of computers in general would never be within reach of leveraging.
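For anyone who wasn't around then: the networks in question were tiny, and the training recipe is much the same today. Here's a minimal sketch, in modern Python/NumPy and purely illustrative (nothing here is from any actual '90s codebase), of a two-layer backprop network learning XOR; the algorithm is roughly what was taught back then, and the main thing that has changed since is the hardware underneath.

    # Purely illustrative: a tiny feedforward network of the kind studied
    # in the early '90s, trained on XOR with plain backpropagation.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(20_000):
        h = sigmoid(X @ W1 + b1)            # forward: hidden layer
        out = sigmoid(h @ W2 + b2)          # forward: output layer
        d_out = out - y                     # backward: log-loss delta at output
        d_h = (d_out @ W2.T) * h * (1 - h)  # backward: delta at hidden layer
        W2 -= lr * h.T @ d_out              # plain gradient-descent updates
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out.ravel(), 2))         # approaches [0, 1, 1, 0]

Scale the layer widths and the data up by a few orders of magnitude and you get, more or less, the modern situation.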

But by now, why is it only cumulative hardware advances that are leading to such a widely remarkable level of performance? Why not cumulative anything else? That kind of accumulation is one thing open source is supposed to allow for, like nothing else.

I have certainly been waiting for AI on the PC like this to become even slightly useful to me, for over 40 years, while there have been constant outstanding uses for which nothing else will do, and which have been languishing since long before that, waiting to see if it would ever come true.

I've even got one spreadsheet with a column that has sat empty since then, where the truly intelligent decision-making that needs to be done at that point requires me to step into the loop in a way that nothing unauditable could ever substitute for.

That step was always a breeze for me to do personally, before, during, and after I designed the spreadsheet to accommodate the offerings from AI software vendors in the '90s.

It's good not having any pressure to jump the shark.

After all, the money-making machine learning I had personally coded in 1980 was really promising, even though it focused entirely on enhancing my natural decision-making ability like only a computer can. It was plain to see that the best that people had done so far (even with mainframes) was not actually going to cut it when it came to the intelligent part, if there was a lot at stake, so waiting would definitely be required.

But now we're back to the mainframe model, so I guess that might be an indication that somebody needs to approach AI so differently that even a cave-PC can do it.

Still waiting.