That is a very good question. Wikidata is certainly not at that size yet, so how it would perform at that scale is still anyone's guess. Wikidata (or Wikibase) is better suited as a crowdsourcing or user-editing platform, and less as a platform to host linked data as-is. This user-facing role comes at a price: redundancy. Wikidata items are stored as blobs in a relational database, which are in turn copied into an RDF structure stored in Blazegraph, and within that RDF structure there is yet more redundancy between the full statements and the truthy statements.
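To make that last redundancy concrete, here is a minimal sketch in plain Python of how a single Wikidata claim is materialized twice in the RDF export: once as a direct "truthy" triple (`wdt:` prefix) and once as a full statement group (`p:`/`ps:` prefixes) that can carry rank, qualifiers and references. The claim itself (Q42 instance of human) is real; the statement node ID is hypothetical.

```python
# One claim -- "Douglas Adams (Q42) instance of (P31) human (Q5)" --
# is exported to RDF twice, following the Wikidata RDF dump conventions.

# Direct "truthy" form: only the best-rank value, one triple.
truthy = [
    ("wd:Q42", "wdt:P31", "wd:Q5"),
]

# Full statement form: a reified statement node (hypothetical ID here)
# that the rank, qualifiers and references hang off.
full = [
    ("wd:Q42", "p:P31", "s:Q42-abc123"),              # item -> statement node
    ("s:Q42-abc123", "ps:P31", "wd:Q5"),              # statement's value
    ("s:Q42-abc123", "wikibase:rank", "wikibase:NormalRank"),
]

# The same value appears in both serializations, so every truthy
# statement at least doubles the triples stored for that claim.
total_triples = len(truthy) + len(full)
print(total_triples)  # triples materialized for one claim
```

At 1.8B records this duplication multiplies quickly, which is part of why the backbone question matters.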
Occurrence records already exist, so we could argue that this user-interface role is less of a requirement; a core RDF store (e.g. GraphDB, Virtuoso, Stardog) might be a better fit here. Wikidata and other Wikibase systems could then treat this core triple store as a backbone.
The question now is whether a core RDF store can host 1.8B records. Resources like UniProt, which serves many billions of triples through a SPARQL endpoint, suggest it can.