> MP> Automated, repetitious processes are one thing. AI is supposed to
> MP> include a decision making element. Like the driverless cars you
> MP> mentioned. They must be able to decide whether or not to stop at a red
> MP> light, or before hitting a jaywalker. A simple automated process would
> MP> just run the pedestrian over, while AI automation should stop until the
> MP> unexpected obstacle has cleared its path.
> I have an online form with an action page that checks to make sure that the
> form was filled out properly before accepting it. The action page makes
> decisions and acts accordingly. Of course this is not as useful as robot
> assassins, but I see no difference in the amount of "AI." Am I missing
> something?
I have one of those on my old circa-1994 GT Power BBS. Based on the
answers the user gives, it reacts a little differently.
IMHO, that is a form of AI, and it is about what was commonly available
back in the 1990s. When these folks talk about AI now, they are talking
about systems that make much more complex decisions.
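To illustrate, the kind of 1990s-era "AI" I have in mind is just
hand-written branching rules. Here is a rough Python sketch; the prompts
and responses are made up for illustration, not my actual GT Power
script:

# A minimal sketch of a rule-based questionnaire, the sort of
# hand-written branching logic a mid-1990s BBS door might use.
# Prompts and responses are invented for illustration only.

def ask(prompt, valid):
    """Keep prompting until the caller types one of the valid answers."""
    while True:
        answer = input(prompt + " ").strip().upper()
        if answer in valid:
            return answer
        print("Please answer one of: " + ", ".join(sorted(valid)))

def questionnaire():
    first_time = ask("Is this your first call to the board? (Y/N)", {"Y", "N"})
    if first_time == "Y":
        print("Welcome aboard! Check out the new-user file area first.")
        return
    interest = ask("Interested in (F)iles, (M)essages, or (D)oors?", {"F", "M", "D"})
    if interest == "F":
        print("Latest uploads are in file area 1.")
    elif interest == "M":
        print("The AI debate is raging in the message echoes.")
    else:
        print("Try the trivia door; it reacts to your answers too.")

if __name__ == "__main__":
    questionnaire()

Whether you call that AI or just a pile of IF statements is exactly the
question.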
When IBM introduced Watson to the world a few years ago, IMHO, that was
some pretty top-of-the-line (at the time) AI. I suspect that thought
leaders like Musk, Zuckerberg, the folks at Google, etc., are expecting
the AI of today to make Watson look rather simple.
If you are interacting with Google at all these days, you are probably
receiving an AI answer at the top of many of your results pages.
With a lot of business leaders, I do believe you are correct that it is
a buzzword, though. Sort of like the initial dot-com boom and bust in
the late 1990s, a lot of company leaders want to be on the AI train,
even though I suspect that many/most of them don't really understand
where the train might go.
* SLMR 2.1a * Nothing is so smiple that it can't get screwed up.
--- SBBSecho 3.20-Linux
* Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)