It's finally time to discuss the main highlights of #CASP15's assembly category, where we had these beautiful 41 targets:
Compared to #CASP14, #CASP15 saw a significant increase in both the number of targets offered and the number of groups participating in the challenge:
Across these targets, the community produced extremely good results when judged by the interface patch similarity (IPS) and interface contact similarity (ICS) scores. In #CASP15, at least one high-accuracy model (ICS >= 0.8) was generated for almost 50% of the cases, compared with only 7% in #CASP14.
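For readers unfamiliar with the metric: ICS is essentially an F1 score over inter-chain residue contacts shared between a model and the reference structure. A minimal sketch (the contact sets here are toy inputs; real CASP scoring derives them from 3D coordinates with a distance cutoff):

```python
def interface_contact_score(model_contacts: set, target_contacts: set) -> float:
    """F1 over inter-chain residue contacts: 1.0 means a perfect interface."""
    if not model_contacts or not target_contacts:
        return 0.0
    shared = model_contacts & target_contacts
    precision = len(shared) / len(model_contacts)
    recall = len(shared) / len(target_contacts)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: contacts as (chainA_residue, chainB_residue) pairs
target = {("A10", "B55"), ("A11", "B56"), ("A12", "B57"), ("A14", "B60")}
model  = {("A10", "B55"), ("A11", "B56"), ("A12", "B57"), ("A13", "B58")}
print(interface_contact_score(model, target))  # → 0.75
```

With 3 of 4 contacts recovered (and 1 spurious one), precision = recall = 0.75, so ICS = 0.75 — well below the 0.8 high-accuracy bar.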
This significant improvement is also reflected in the difficulty comparison performed over all multimeric rounds, when ICS and the shape metric (TM-score) are considered 👇
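The shape metric mentioned above is the TM-score, which averages a distance-weighted term over aligned residues and normalizes by target length. A hedged sketch (the per-residue distances are assumed to come from a structural superposition, which is not computed here):

```python
import math

def d0(L: int) -> float:
    """Length-dependent normalization from Zhang & Skolnick's TM-score."""
    return 1.24 * (L - 15) ** (1.0 / 3.0) - 1.8 if L > 15 else 0.5

def tm_score(distances: list, L_target: int) -> float:
    """TM-score in (0, 1]; values above ~0.5 usually indicate the same fold.

    distances: distances (in Angstrom) between aligned residue pairs
    after superposing model onto target.
    """
    cutoff = d0(L_target)
    return sum(1.0 / (1.0 + (d / cutoff) ** 2) for d in distances) / L_target
```

A perfect superposition (all distances zero over all residues) gives a TM-score of exactly 1.0; missing or badly placed residues pull the score down.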
Which groups generated these excellent results? Looking at the top 5, we see Zheng, Venclovas, Wallner, and Yang, all using improved versions of #AF2-multimer:
It's also striking that naive AF2-multimer produced the same outcome as its modified versions in only 44% of the cases:
We now know that naive AF2-multimer, run by the @arneelof group, failed in four specific categories where several improved AF2-m versions prevailed.
In the end, no single method works well across all of the "improved cases". But we now know that the following keywords matter for generating significantly better multimeric models than the standard AF2-m release:
So, the significant assembly prediction improvement we witnessed in #CASP15 is a reflection of the tertiary structure prediction revolution observed in #CASP14. Even so, smart tweaks performed by several academic and industry groups were needed to push the limits of multimeric modeling further.
Now it is time to dissect what worked well and why, so that we can come up with a consensus method that performs consistently better across different classes of targets.