I've thought about this quite a lot. I'd have to argue that no AI is sentient; it is simply imitating what it has been taught sentience looks like.
Here's an example which may be easier to reason about than sentience itself.
A man is taken as a Prisoner of War to, let's say, Korea. This man is English and as such does not understand Korean. He is provided with a book in his cell containing Korean phrases paired with Korean responses, with no English anywhere. When someone reads a phrase from the book, the man must respond, in Korean, with the appropriate response. Given enough time, the man wouldn't need the book anymore, being able to respond perfectly to any phrase despite not knowing what he's saying.
Korean speakers talking to him would get fluent Korean responses, and as such would believe he is fluent, despite him having no translation of what he's saying and therefore no understanding of the conversation.
This is similar to how an AI would function. The AI could figure out the correct responses to prompts through trial and error, and as such could appear to be sentient. Given enough time, any artificial intelligence could perfectly mimic actual sentience, pass all the tests, and be declared sentient, without ever actually having been so.
Another argument could be that the AI was sentient from the moment it was created. However, the sentient beings that made it likely coded in what they perceive as sentience. The AI would therefore appear to be sentient while only following its coding, which, again, only makes it appear sentient, not actually be so.
I don't believe an artificial intelligence, IPC or otherwise, could ever be fully sentient. Can they have a watered-down version built from code and copied behaviour, which they believe to be the full thing? Absolutely. Can they achieve the same level of sentience as a Vulpkanin or Human? Probably not.