@benthecarman
740,298 sats stacked
stacking since: #4964 \ longest cowboy streak: 182 \ verified stacker.news contributor
443 sats \ 0 replies \ @benthecarman 28 Oct \ on: On Ossification bitcoin
Saying we don't need covenants because we can do them on a layer 2 is a gross misunderstanding of the entire point of covenants.
Covenants' main benefit (in my opinion) is enabling multi-party protocols / layer 2s. Without them we can only build 2-party protocols (lightning), unless we accept huge complexity and availability requirements that defeat the whole purpose.
This argument comes from the block size wars, where it actually made sense: we don't want every transaction to happen on-chain, we want them on higher layers. Advocating for covenants is a continuation of that idea: we don't want to do an on-chain transaction to onboard every user onto bitcoin; ideally we can group hundreds to thousands of users into a single UTXO and onboard them in a vastly cheaper way.
We can't do covenant functionality on a higher layer because we need covenants to be able to build the higher layers.
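To make the "group many users into one UTXO" point concrete, here is a toy Python sketch of the core idea (purely illustrative; the names and hashing are made up for this example and are not BIP-119's actual template serialization): a covenant ties a UTXO to a predefined set of child outputs, so the only valid spend is the one everyone agreed to in advance.

import hashlib
from typing import List, Tuple

Output = Tuple[str, int]  # (destination, amount in sats) -- illustrative only

def commit_outputs(outputs: List[Output]) -> bytes:
    # Hash the exact set of outputs the covenant will allow.
    h = hashlib.sha256()
    for dest, amount in outputs:
        h.update(dest.encode())
        h.update(amount.to_bytes(8, "little"))
    return h.digest()

def covenant_allows_spend(committed: bytes, spend_outputs: List[Output]) -> bool:
    # Covenant-style check: the spend is valid only if its outputs match
    # the template committed to when the shared UTXO was created.
    return commit_outputs(spend_outputs) == committed

# One shared UTXO commits to exits for a thousand users at once.
users = [(f"user{i}", 10_000) for i in range(1_000)]
commitment = commit_outputs(users)
assert covenant_allows_spend(commitment, users)                        # pre-agreed split: allowed
assert not covenant_allows_spend(commitment, [("thief", 10_000_000)])  # anything else: rejected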
One day in 5th grade everyone called each other by their name backwards. Mine was the only one that stuck, and I was neb for the rest of the year; my teacher even called me it once.
163 sats \ 4 replies \ @benthecarman 2 Sep \ parent \ on: Soft-Fork/Covenant Dependent Layer 2 Review bitcoin
This is only for the outputs of the transaction. In reality it would actually be much smaller.
105 sats \ 1 reply \ @benthecarman 22 Aug \ parent \ on: High Feees on Mempool.space right now bitcoin
254 sats \ 5 replies \ @benthecarman 22 Aug \ parent \ on: High Feees on Mempool.space right now bitcoin
This wasn't us lol
I believe this is out of date. There was a research paper published today about how JWST data can be used to correct it.
Here's a video about the paper before it was released
[alias]
    rb = rebase -S
    co = checkout
    ci = commit -S
    ic = commit -S
    cp = cherry-pick -S
    br = branch
    st = status
    lg = log --graph --format='%C(yellow)%h%Creset -%C(auto)%d%Creset %s %C(green)(%ar) %C(cyan)<%an>%Creset'
    branches = branch -a
    desc = describe
    last = log -1 HEAD
    pom = pull origin master
    remotes = remote -v
    tags = tag -l
    unstage = reset HEAD --
    ft = fetch --all
    rs1 = reset --soft HEAD~1
    rs2 = reset --soft HEAD~2
    rs3 = reset --soft HEAD~3
    rs4 = reset --soft HEAD~4
    rs5 = reset --soft HEAD~5
    rs6 = reset --soft HEAD~6
    rs7 = reset --soft HEAD~7
    rs8 = reset --soft HEAD~8
    rs9 = reset --soft HEAD~9
    rh = reset --hard
    rh1 = reset --hard HEAD~1
    rh2 = reset --hard HEAD~2
    rh3 = reset --hard HEAD~3
    rh4 = reset --hard HEAD~4
    rh5 = reset --hard HEAD~5
    rh6 = reset --hard HEAD~6
    rh7 = reset --hard HEAD~7
    rh8 = reset --hard HEAD~8
    rh9 = reset --hard HEAD~9
    df = diff --compact-summary master
    oc = checkout
    ps = push
    psh = push
    phs = push
    psuh = push
    phus = push
    phsu = push
    puhs = push
    push-f = push -f
    puhs-f = push-f
    push0f = push -f
    puhs0f = push -f
    add-p = add -p
    ds = diff --staged
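The -S variants just GPG-sign commits, rebases, and cherry-picks by default, and the pile of psh / phs / puhs / phsu entries are typo catchers that all map back to push.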
I also have this to pull in the latest version of a branch from upstream:
function update() {
    git fetch --multiple upstream origin
    if [ -n "$1" ]
    then
        git checkout $1
        git merge remotes/upstream/$1
    else
        git checkout master
        git merge remotes/upstream/master
    fi
    git push -f
}
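So, assuming remotes named origin and upstream, running update syncs your local master with upstream and force-pushes the result, and update <branch> does the same for that branch.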
There's just not enough content. I can brainrot-scroll twitter for hours and never run out of stuff; I run out of things on nostr in 20 mins.
Doing batches of 10 instead of 1000 seems like it'd be a huge hit to performance; is this just because of bandwidth? Do the headers still come in batches of 1000? I imagine there isn't as much concern for headers because they're only 80 bytes.
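For rough scale, a quick back-of-the-envelope in Python (assuming the 80-byte header size and the batch sizes mentioned above):

HEADER_SIZE = 80  # bytes per block header
for batch in (10, 1000):
    print(f"{batch:>5} headers ≈ {HEADER_SIZE * batch / 1024:.1f} KiB")
# 10 headers ≈ 0.8 KiB, 1000 headers ≈ 78.1 KiB -- tiny either way compared to full blocks.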