The congresswoman did a good job bringing it up, but I honestly think that even if Google provided a stack of pages of actual code detailing the exact algorithm at play, and stepped through it line by line in a test scenario, there'd still be millions of dipshits believing in a little man behind the scenes dictating the exact results of every permutation of every single Google search in a grand conspiracy to "make" Republicans look bad.
This is where I jump in and say it's not that simple. In your scenario there is a body of code, explicit computer instructions, that could be audited to see whether it produces unfair outcomes.
In modern artificial intelligence, the most broadly applicable results for a given investment are instead achieved by taking a very general model for decision-making (a neural network, for instance, though the argument here applies to a much broader range of machine learning) and training it on a selected subset of data.
Now there are many ways in which this model will tend to reproduce the bias of its inputs. And we should all be concerned about this, not just conservatives.
Suppose we trained that model on police searches of automobiles, and suppose it tended to assign a high likelihood of carrying drugs to people within a certain range of skin tones. So the model is neutral, right? No, not necessarily. Somebody chose to conduct those searches; they had to conduct them within the constraints of the local interpretation of complex search and seizure laws, and they probably also patrolled the areas where people of that range of skin tones tended to be.
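You can see the mechanism with a toy simulation (everything here is made up for illustration, not real policing data): both groups carry contraband at exactly the same true rate, but one group gets searched ten times as often, so the recorded "found with drugs" labels, and any model trained on them, end up skewed toward that group.

```python
import random
random.seed(0)

# Hypothetical setup: identical true carry rate for both groups,
# but very different search rates. All numbers are invented.
TRUE_CARRY_RATE = 0.10
SEARCH_RATE = {"A": 0.50, "B": 0.05}

population = []
for _ in range(100_000):
    group = random.choice(["A", "B"])
    carries = random.random() < TRUE_CARRY_RATE
    searched = random.random() < SEARCH_RATE[group]
    # The label that actually lands in the training data:
    # "found carrying drugs" -- only possible if a search happened.
    found = carries and searched
    population.append((group, found))

def score(group):
    """Naive 'model': observed find rate per group in the recorded data."""
    labels = [found for g, found in population if g == group]
    return sum(labels) / len(labels)

print(f"score A: {score('A'):.3f}")  # roughly 0.05
print(f"score B: {score('B'):.3f}")  # roughly 0.005, ~10x lower
```

Even though the underlying behavior is identical, the model scores group A about ten times riskier, purely because the searches that generated the labels were concentrated there.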
The bias isn't in the algorithm, it's in the data.