Meta's own internal research (and its handling of it) has shown that the company repeatedly ignores well-substantiated findings about the harms of its products. Now that Section 230 is looking like a flawed shield, I fear the takeaway for other companies will be: never do the research in the first place, to preserve plausible deniability.
Meta has always wanted the appearance of caring about safety (which also helps it attract talent) while nearly always prioritizing growth, save for brief blips like 2017, when the Cambridge Analytica fallout was reaching a crescendo. Companies like X, by contrast, are run by people who are explicitly uninterested in putting significant resources into safety, especially research.
I will also add that, over the past few years, both Meta and X have become extremely hostile to external research on their platforms, shutting down tools and cutting off access to data streams.