Getting a lot of interest from people wanting to hear "@props, how did you do that?" re: @Waxbones' @Prometheus_Lab real-time random trait rolling and on-the-fly generation. So here we go.
I'll be referencing this high level diagram to unpack each step. Some jargon may be technical but will try to keep it simple (comment with any questions!):
1/11: @props Super Structure:
We recently deployed our upgradable on-chain structure that provides us the ability to deploy contracts (via proxy) and then manage them through a powerful admin tool (not public, yet?).
The Props Platform has allowed us to start modularizing many of our capabilities w/ one-click deployment (721/A, 1155, Comics, Albums, 10Ks, etc), with 50-100x gas savings on contract creation.
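The thread doesn't spell out how those creation savings are achieved, but savings in that range are typical of minimal-proxy clones (EIP-1167-style), where each new "contract" is a ~55-byte forwarder delegating to one shared implementation. A sketch of that idea, purely as an assumption about the mechanism:

```typescript
// Sketch: build EIP-1167 minimal-proxy creation code pointing at a shared
// implementation. Deploying these ~55 bytes is far cheaper than redeploying
// a full 721A contract per project. Illustrative only -- the thread does not
// confirm Props uses exactly this scheme.
function minimalProxyCreationCode(implementation: string): string {
  const addr = implementation.toLowerCase().replace(/^0x/, "");
  if (addr.length !== 40) throw new Error("expected a 20-byte hex address");
  return (
    "0x" +
    "3d602d80600a3d3981f3" +          // constructor: copy runtime code, return it
    "363d3d373d3d3d363d73" + addr +   // runtime: forward calldata, push implementation
    "5af43d82803e903d91602b57fd5bf3"  // delegatecall, propagate return or revert
  );
}
```

The factory then deploys this bytecode (via CREATE/CREATE2) instead of the full implementation each time.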
It provides an on-chain access control layer so *many* people on a team with different roles can maintain a project (which can have many contracts) rather than just one dev with a single wallet using a point-solution to manage a single contract. 💪
***We're currently running Props in "Pixar Mode"***
IE, like Pixar's model, we've built amazing sick tech, but for now we're using it to incubate projects internally. We're kicking around some ideas for the future however.
The Props Platform also has (imho) the most flexible and powerful allowlist logic on the planet, providing us an ability to have multiple allowlists mint simultaneously and for individuals to mint their allocations (w/ variable pricing) across those lists in one single cart.
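The "multiple allowlists, one cart" idea can be sketched as follows. The shapes and names here are hypothetical, not the Props contract API; the point is that each list carries its own price and per-wallet allocation, and one checkout draws from several lists at once:

```typescript
// Hypothetical model of a multi-allowlist cart: each list has its own
// per-token price and a per-wallet allocation; priceCart totals a single
// checkout that spans several lists.
interface AllowlistSlot {
  listId: string;
  priceWei: bigint;   // per-token price on this list
  allocation: number; // max tokens this wallet may mint from this list
}

function priceCart(
  slots: AllowlistSlot[],
  requested: Record<string, number> // listId -> quantity in the cart
): bigint {
  let total = 0n;
  for (const slot of slots) {
    const qty = requested[slot.listId] ?? 0;
    if (qty > slot.allocation) throw new Error(`over allocation on ${slot.listId}`);
    total += slot.priceWei * BigInt(qty);
  }
  return total;
}
```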
2/11: The Prometheus Lab contract (721A). Gas efficient. Our contracts register themselves with the Props Registries (contract and access) which make them operable from the platform UI
3/11: The sick af frontend developed by @calvinhoenes (built in @sveltejs). This project actually uses 2 generation algos, one on the front end (Calvin), one on the backend (Me). I got the easy side, he also had to design UX allowing users to re-roll traits and make it FUN
When you're happy with your trait rolls, those trait combos are sent to the mint() function in the contract as calldata, for the sole purpose of emitting them.
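One plausible shape for those trait combos (the thread doesn't show the actual encoding): pack one byte per layer into a hex string, pass it to mint(), and decode it later from the emitted event on the backend:

```typescript
// Hypothetical trait encoding: one byte per layer (so up to 256 options per
// layer), hex-encoded for calldata. The real Prometheus Lab format may differ.
function encodeTraits(traitIndices: number[]): string {
  return "0x" + traitIndices
    .map((i) => {
      if (i < 0 || i > 255 || !Number.isInteger(i)) {
        throw new Error("trait index must fit in one byte");
      }
      return i.toString(16).padStart(2, "0");
    })
    .join("");
}

// Backend side: recover the trait indices from the emitted data.
function decodeTraits(hex: string): number[] {
  const body = hex.replace(/^0x/, "");
  const out: number[] = [];
  for (let p = 0; p < body.length; p += 2) {
    out.push(parseInt(body.slice(p, p + 2), 16));
  }
  return out;
}
```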
4-5/11: We then sync those emitted events to a @MoralisWeb3 DB.
6-7/11: A background process on our infrastructure subscribes to new records being written to the DB, and when it sees one, it sends a new task to a Message Queue. A message queue stores work to be done (like generation) and provides a ton of resiliency...
we care not if a generation task might fail due to a server running out of memory or a burst of user activity minting or whatevs. The tasks are durable and only removed from the queue when completed by a worker. Workers are free to fail, as another steps up to take the task.
8/11: We multi-thread the workers using Throng for stupid horsepower across the infra and using all cores available. A consumer process listens for work in real-time from the message queue and executes generation when it sees a new task, removing it from the queue on success.
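In production this fan-out is real process forking (Throng clusters one worker per core); the sketch below keeps the same shape in-process so the flow is visible, with async loops standing in for forked workers:

```typescript
import os from "node:os";

// Sketch of core fan-out: one consumer loop per CPU core pulls tasks until
// the queue drains, and a task only counts as completed after its handler
// succeeds. Stand-in for throng/cluster forking real processes.
async function runWorkers(
  tasks: string[],
  handle: (task: string) => Promise<void>,
  workerCount: number = os.cpus().length
): Promise<number> {
  const queue = [...tasks];
  let completed = 0;
  const worker = async () => {
    for (;;) {
      const task = queue.shift(); // safe here: single-threaded event loop
      if (task === undefined) return;
      await handle(task);
      completed++; // only counted after successful handling
    }
  };
  await Promise.all(Array.from({ length: workerCount }, worker));
  return completed;
}
```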
9/11: The generation process is actually pretty simple logic. Since the trait combos were emitted and logged, we just break them apart and then fetch the trait asset corresponding to each, layering them on top of each other using canvas.
We then generate the metadata reflecting those selections and upload the generated image to IPFS, inserting the CID into the metadata's image attribute. This metadata gets stored in the DB by tokenID. From confirmed mint txn to completed token, this process takes 3-5 seconds.
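The metadata side of that step looks roughly like this. Field names follow the common OpenSea-style schema (`image`, `attributes` with `trait_type`/`value`), and the token name format is a guess, not the confirmed Prometheus Lab schema:

```typescript
// Sketch: assemble ERC-721 metadata for a tokenId once the layered image has
// been uploaded to IPFS and we have its CID. Schema follows the widely used
// OpenSea conventions; the exact Prometheus fields may differ.
interface TraitSelection { layer: string; value: string }

function buildMetadata(
  tokenId: number,
  traits: TraitSelection[],
  imageCid: string
) {
  return {
    name: `Prometheus Lab #${tokenId}`, // hypothetical naming format
    image: `ipfs://${imageCid}`,
    attributes: traits.map((t) => ({ trait_type: t.layer, value: t.value })),
  };
}
```

This object is what gets stored in the DB keyed by tokenID and later served by the API.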
10/11: Our database of choice is Postgres. We wrap our PG databases with Hasura which provides us an instant graphql API server and record streaming which negates the need for ORMs and maintaining schemas outside of the DB itself (pretty f'n cool)
11/11: The Front-End has a public facing API that consumes data from the DB, caching and serving results by tokenID using Redis for snappy response times on tokens that have been fetched before (Redis not on diagram my b)
12/11: The API acts as the BASEURI in the contract and gives us the speed to instant-reveal user generated NFTs... until we mint out all 3,333 units at 1PM EST today on prometheuslab.xyz, at which time all of the token metadata will be sent to IPFS and the BASEURI frozen
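The baseURI flip described above works like this at the contract level: tokenURI(id) is baseURI + id, and at mint-out the baseURI is repointed from the live API to a frozen IPFS directory. A sketch with illustrative URIs (not the real endpoints):

```typescript
// Sketch of the baseURI lifecycle: serve from the live API during mint, then
// point at an immutable IPFS directory and refuse further changes.
class BaseUriResolver {
  private frozen = false;
  constructor(private baseUri: string) {}

  tokenURI(tokenId: number): string {
    return this.baseUri + tokenId; // standard baseURI + tokenId concatenation
  }

  freezeTo(ipfsDir: string): void {
    if (this.frozen) throw new Error("baseURI already frozen");
    this.baseUri = ipfsDir;
    this.frozen = true;
  }
}
```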
Soooo yeah, that's pretty much it. If you made it this far, #props to you.
Bull or bear we persevere and continue building. We're honored for this opportunity.
How to support Props? Support the projects we help. These amazing souls are the ones creating value in Web3.
First, we used the Moralis.io NFT Owner API to create a snapshot of Lab Access Pass holders on the evening of Feb 13th. We then shuffled that list using a Fisher-Yates shuffle, and stored it on IPFS, locking its contents:
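The Fisher-Yates shuffle mentioned above is the standard unbiased shuffle: walk the array from the end, swapping each position with a uniformly chosen index at or before it. A minimal version:

```typescript
// Fisher-Yates shuffle: returns an unbiased random permutation of the input.
// rand is injectable so a seeded generator could make the shuffle reproducible.
function fisherYatesShuffle<T>(items: T[], rand: () => number = Math.random): T[] {
  const a = [...items]; // don't mutate the caller's snapshot
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1)); // uniform j in [0, i]
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}
```

Pinning the shuffled output to IPFS is what makes the ordering verifiable: the CID commits to the exact list contents.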
Can't wait for all the analytics to compile from the @WoodiesNFT public sale today. But holy cow, the system we built far exceeded my expectations 🧵
We were handling a lot of traffic at once, over 4k unique visitors simultaneously at the peak (snapshot from just after that)
Average response time from client request to server response was 83ms, serving 100 requests per second at the peak. Time to first byte on 3G was 0.8s! Total memory usage was 40MB constant.
Interesting insights from ~$500,000 of @woodiesnft whitelist pre-sales as we gear up for public passports tomorrow at 9a PDT / 12p EST on WoodiesNFT.com 🧵
First thing that jumps out at me is that our community is global. The US, Canada, UK, Australia, Germany, France, Netherlands, India, Brazil, Malaysia, Spain, Indonesia, Singapore, Turkey, Austria, China, Mexico, Russia, Thailand, Italy, Philippines, Japan, Portugal....
Switzerland, New Zealand, Sweden, Czechia, Belgium, Hong Kong, Argentina, Poland, Colombia, Ireland, Ukraine, Denmark, United Arab Emirates, Norway, South Africa, Romania, Taiwan, Hungary, South Korea, Bulgaria, Croatia, Finland, Vietnam, Pakistan, Saudi Arabia, Serbia, Estonia..
Today is a big day in many respects - The @Ultra_DAO team just deployed our first contract to power the @WoodiesNFT universe, and for me, the culmination of many weeks of 3am goodnights and 7am good mornings. But, this is just the beginning. We're just getting warmed up.
I'd also like to give a big shout out and thank you to @HardhatHQ, @OpenZeppelin and all the other teams building the foundational frameworks for all of this.
Without these two, there is absolutely ZERO chance we could have pulled off what we did in a month. I just don't see how this can be done without @HardhatHQ tests and deployment scripts / automation. No way.