Language is something wonderful: it allows us to formulate our opinions, describe our findings and communicate with the world. Depending on the language, your sentences can range from simple to highly complex and still be understandable – just look at any legal text to see how complex language can be.
Ever tried to get a machine to understand a text, including its context? If so, chances are high that you noticed the downside of this flexibility of natural language: we can be vague and ambiguous without even noticing it!
Let me give you a small glimpse of the blog post:
When humans talk or write, they are not as precise as you might think. Usually, this is no issue because the meaning is implied by the context, and if not, the other participants will ask questions until they understand. Problem solved, everyone is happy, the conversation may continue.

Now, imagine you are reading a medical report and come across a vague formulation: what can you do? You can ask a doctor for clarification or – if you are a radiologist yourself – interpret the images. Next, imagine you want to interpret these reports automatically. You develop an artificial intelligence (AI) to extract the pathologies from the reports. It works just fine, until it encounters a case where the formulation is unclear. The AI cannot ask the radiologist, and it cannot check the MRI for “additional context” – because the result of the label extraction is provided to other AIs so they can learn to identify the MRI images. This is a bit of a chicken-and-egg problem.
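To make the ambiguity problem concrete, here is a deliberately naive, hypothetical sketch (not the actual ScanDiags extractor): a keyword-based label extractor that handles a clearly negated finding correctly, but misreads hedged radiology language such as “cannot be excluded” as a definite positive finding. All pathology names and report sentences below are invented for illustration.

```python
# Hypothetical, naive keyword-based label extraction from radiology-style text.
# Real pipelines are far more sophisticated; this only illustrates the pitfall.

def extract_labels(report: str) -> dict:
    """Map pathology keywords to True (present) / False (absent)."""
    findings = {}
    text = report.lower()
    for pathology in ("tear", "edema", "stenosis"):
        if pathology in text:
            # Naive negation handling: only catches an explicit "no <finding>".
            findings[pathology] = ("no " + pathology) not in text
    return findings

clear = "No tear of the supraspinatus tendon. Mild edema."
vague = "Signal alteration of the supraspinatus, tear cannot be excluded."

print(extract_labels(clear))  # tear correctly marked absent, edema present
print(extract_labels(vague))  # hedged wording is wrongly read as a definite tear
```

The second sentence is exactly the kind of formulation a human reader would flag as uncertain, but a simple extractor has no way to ask for clarification or to consult the images.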
Read the full blog post on the ScanDiags webpage.