Well, yes. Basically certain types of decision-making logic trees are AI and others are just some dude writing code to "shake things up for the audience". I don't really know enough to draw a concrete "everything on the left of this line is AI, everything on the right isn't" boundary, especially because we have evolved a lot of new thinking on the matter.
From a philosophical viewpoint, and without getting into too much nonsensical metaphysics, I would paint a minimalist AI as any distinct analysis done by a machine whereby the machine can take the same input and come up with a different output solely by virtue of its processing of existing output (data or information).
That is to say, a program which does A then B simply because 33% of the time it picks B off an RNG is not an AI. A program that chooses A then B by virtue of being programmed to evaluate the nature of A's results, and thus decides on B where B is some sort of heuristically defined 'improvement' towards the purpose of the machine, is. I use quotes because you could program a machine to make worse and worse decisions if you wanted to, but you get my point.
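To make the contrast concrete, here's a minimal sketch of the two kinds of program. Everything here is illustrative (the `score` heuristic is hypothetical, not from any real system); the point is only that the second function's choice depends on evaluating A's results, while the first's depends on nothing but a dice roll.

```python
import random

# Not AI by this definition: picks B 33% of the time,
# purely off an RNG, ignoring what A produced.
def rng_chooser(result_of_a):
    return "B" if random.random() < 0.33 else "A"

# Closer to a minimal AI by this definition: evaluates the
# result of A and only moves to B when that evaluation says B
# would be an 'improvement'. score() is a stand-in heuristic.
def evaluating_chooser(result_of_a, score):
    return "B" if score(result_of_a) < 0.5 else "A"
```

Same input, different output: feed `rng_chooser` the identical result twice and its answer can change for no reason at all, whereas `evaluating_chooser` changes its answer only when the evaluated data changes.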
Keep in mind this doesn't necessarily make it sentient, or even imply that the machine has experiences in any meaningful sense; just that it can evaluate and make decisions based on pre-existing data.
For instance, take a robot that makes panels for cars. If it produces 99.5% perfect panels because 0.5% just randomly come out bad, and an engineer calculates how to move one roller to improve the success rate, it's just a robot. But if it produces 99.5% perfect panels stock from the factory, and then after producing panels the machine itself works through that complex dataset, moves one roller 0.1mm, and cuts its error rate from 0.5% to 0.3% (a 99.7% success rate), that's a very narrow but still decision-making AI.
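The panel robot's feedback loop could be sketched like this. All names here (`roller_offset_mm`, `observed_error_rate`, the 0.1mm step) are made up for illustration; a real controller would learn the direction and size of the adjustment from its own production data rather than use a fixed step.

```python
# Hypothetical sketch: the machine observes its own error rate
# and nudges a roller to try to reduce it, with no engineer in
# the loop.
def adjust_roller(roller_offset_mm, observed_error_rate,
                  target_error_rate=0.003, step_mm=0.1):
    # If too many panels are coming out bad, move the roller a
    # small fixed step; otherwise leave it where it is.
    if observed_error_rate > target_error_rate:
        return roller_offset_mm + step_mm
    return roller_offset_mm
```

The decision to move (or not move) the roller is driven entirely by the machine's own pre-existing output data, which is exactly the minimalist-AI criterion above.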