Agreed, you will likely need to revert to a vSS or at least move one pNIC into a vSS. Ideally you would evacuate the ESXi host before performing this maintenance (not easy with no vCenter). That said, I have done several of these with no downtime for the VMs running on the host: I simply removed one of my 10Gb NICs from the vDS (via the "Home > Inventory > Networking" or "Host > Configuration > Networking" page) and added it to a vSS long enough to bring vCenter and the VCDB back online. YMMV with routing loops depending on your network configuration. You may also consider unchecking the HA feature "Enable Host Monitoring" on the cluster in which the host lives, to prevent false isolation events while removing/adding pNICs.
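As a rough sketch, the same pNIC shuffle can be done from the host's CLI (SSH or the DCUI shell) instead of the UI; `dvSwitch`, `vSwitch1`, `vmnic1`, the DVPort ID, and VLAN 100 below are all placeholders for your environment:

```shell
# List current switches; note the vDS name and the DVPort ID vmnic1 is using
esxcfg-vswitch -l

# Unlink vmnic1 from the vDS (substitute your real DVPort ID and vDS name)
esxcfg-vswitch -Q vmnic1 -V <dvport-id> dvSwitch

# Build a temporary standard switch and hang vmnic1 off it
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# Port group for the vCenter VM, tagged with its VLAN (100 is a placeholder)
esxcli network vswitch standard portgroup add --portgroup-name=vCenterPG --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=vCenterPG --vlan-id=100
```

From there you can point the vCenter VM's vNIC at `vCenterPG` via its Edit Settings dialog on the host client, and reverse the steps once vCenter is healthy.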
Alternatively, if the ESXi host has a spare, unused NIC port (e.g. Gb copper), you could have the network team configure the vCenter VLAN on that switch port and plug it into the host temporarily, as another way to get vCenter online via a vSS without touching the vDS uplinks.
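If you go the spare-NIC route, a quick sanity check from the host shell before cutting the VM over (NIC name is a placeholder):

```shell
# Confirm the spare NIC sees link once the switch port is configured;
# e.g. vmnic2 should show Link Status 'Up' at the expected speed
esxcli network nic list

# Verify the temporary vSS, its uplink, and the port group all registered
esxcfg-vswitch -l
```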
Another option is restoring the vCenter VM's .vmx from backup, re-registering the VM, answering "I Moved It", and so on. This approach is unlikely to fix the underlying problem, but it has a chance of working if you simply cannot get a new dvPort assigned.
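A minimal sketch of the re-registration dance from the host CLI, assuming the restored .vmx is already in place (VM IDs, datastore name, and path are placeholders you would look up on your host):

```shell
# Find the VM ID of the stale vCenter entry, then unregister it
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/unregister <vmid>

# Register the restored .vmx and power the VM on
vim-cmd solo/registervm /vmfs/volumes/<datastore>/vCenter/vCenter.vmx
vim-cmd vmsvc/power.on <new-vmid>

# If the host poses the moved/copied question, inspect it and answer "I Moved It"
vim-cmd vmsvc/message <new-vmid>
vim-cmd vmsvc/message <new-vmid> <msg-id> <choice-id>
```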
If you are using a Cisco Nexus 1000V for your vDS, you can try switching the vNIC to the quarantine port profile and then back to the desired port group. That will typically clear 'invalid device backing' errors, if you are seeing those. The 1000V may also self-repair in certain chicken-and-egg scenarios if you just leave it alone for an hour.