Diff Adjustment (Potential Design/Tradeoffs)

I’m not a mathematician and there are people on here who are infinitely more clever than I am, but I’ll give it a go…

Could the difficulty logic be amended to use a flexible number of blocks if block time goes above a certain time?

Target the block time to 2 minutes as today and have the epoch length at 1024.

BUT, if block time goes above 4 minutes, the difficulty epoch length changes…

If block time is >4 minutes over the last 1024 blocks, then the new difficulty will be calculated every 720 blocks
If block time is >8 minutes over the last 720 blocks, then difficulty will be calculated every 360 blocks
If block time is >16 minutes over the last 360 blocks, then difficulty will be calculated every 160 blocks
And so on… I’m sure there’s even a formula that could make the change smoother than fixed times or block counts.
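To make the tiers above concrete, here’s a tiny sketch (function and constant names are mine, purely illustrative, not anything in the Ergo node) that picks the next epoch length from the observed average block time:

```python
# Hypothetical sketch: choose a shorter difficulty epoch when the observed
# average block time over the last epoch drifts too far above target.
TARGET_BLOCK_TIME = 120  # seconds (2 minutes)

# (avg-block-time threshold in seconds, next epoch length in blocks),
# checked from slowest to fastest -- the tiers from the post above.
EPOCH_TIERS = [
    (16 * 60, 160),
    (8 * 60, 360),
    (4 * 60, 720),
]

def next_epoch_length(avg_block_time: float, default: int = 1024) -> int:
    """Return the number of blocks until the next difficulty recalculation."""
    for threshold, epoch_len in EPOCH_TIERS:
        if avg_block_time > threshold:
            return epoch_len
    return default

print(next_epoch_length(110))   # healthy chain -> 1024
print(next_epoch_length(300))   # >4 min avg    -> 720
print(next_epoch_length(1000))  # >16 min avg   -> 160
```

A smoother variant would replace the tier table with a continuous function of the average block time, as the post suggests.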

If we get a sudden spike of hashrate, the block-time pain should be a lot less. Of course, it won’t prevent coin hopping, but people will do that regardless.

I’m sure this proposal has some flaws, but I throw my hat into the mix :slight_smile:

4 Likes

It’s more or less what I said 3 hours ago lol

1 Like

I read yours, but I understood it as rerunning the difficulty check at the point block time exceeds 4 minutes, i.e. capping it at 4 minutes.

I was thinking more a scaled approach which alters the difficulty epoch length based on the block time for the last X (variable) blocks.

2 Likes

Pretty new here and I don’t know many proper mining terms, so take it easy on me.
Can the Ergo devs not do something like ScaleHashing?
New word, yes, I just made it up.
Maybe there’s something like this out there already, but here’s what I was thinking.

Example:
Implement a hashrate increase limit for all individual miners of, say, 500 MH per 24-hour period. Could keep track via IPs, addresses, accounts, or something.
So any huge mining operation could only add 500 MH to the network, then would have to wait 24 hours before adding another 500 MH maximum.
That way it’d be harder, or hopefully impossible, for any mining op to slap 40 GH or whatever onto the network and jump off the same day.
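Purely to illustrate the idea (all names are mine, and nothing like this exists in Ergo; later replies in the thread point out why tracking miners is hard), a rolling 24-hour cap on added hashrate might look like:

```python
# Hypothetical illustration of a "ScaleHashing" cap: at most 500 MH/s of
# *new* hashrate per address per rolling 24 hours. Names are invented for
# this sketch; this is not part of any real pool or protocol.
CAP_PER_DAY = 500.0     # MH/s of added hashrate allowed per address per 24h
WINDOW = 24 * 3600      # seconds

class HashrateLimiter:
    def __init__(self):
        self.additions = {}  # address -> list of (timestamp, added_mhs)

    def try_add(self, address: str, added_mhs: float, now: float) -> bool:
        """Allow the increase only if the rolling 24h total stays under the cap."""
        recent = [(t, a) for t, a in self.additions.get(address, [])
                  if now - t < WINDOW]
        if sum(a for _, a in recent) + added_mhs > CAP_PER_DAY:
            return False
        recent.append((now, added_mhs))
        self.additions[address] = recent
        return True

limiter = HashrateLimiter()
print(limiter.try_add("miner1", 400, now=0))      # True
print(limiter.try_add("miner1", 200, now=3600))   # False, would exceed 500
print(limiter.try_add("miner1", 200, now=90000))  # True, window rolled over
```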

I’m crap at explaining, but I hope people understand.
Would something like this be viable and effective in preventing, or at the very least greatly reducing, the current issue ERG is facing?
Does something like this already exist?
Would it negatively impact ERG supporters in any way?

I don’t know; any constructive criticism is welcome.
Just a random lunch-break thought I had.

Keep on Keepin’ on.

1 Like

How about making Ergo a multi-algo blockchain? We have a GPU algo, but why not also have CPUs secure the network?

Pros:
Big GPU farms won’t be able to manipulate difficulty.
For times like now, blocks found on CPUs will move the epoch forward without impacting GPU mining difficulty.
Since the DAA is epoch-based, it shouldn’t matter whether blocks are found on GPUs or CPUs.
The DAA wouldn’t need to change.
It would give the CPU mining crowd exposure to Ergo and all it has to offer.

Cons:
Cryptojackers could potentially take advantage of the new algo on VPSes.
Some portion of rewards (half?) would be taken from GPU miners.
It would require a hard fork.

Overall I think the pros outweigh the cons, but I’m no expert and could be dead wrong on this. What does everyone think? Did I miss any pros/cons?

This seems the right direction to me^^

2 Likes

A problem I’ve seen with CPU miners: they tend to flock to one pool, and you end up with over 50% of the hashrate on a single pool.

Hm, this gives me socialist vibes.

Does adjusting difficulty more frequently open us up to malicious network interference?

Mining with Ergo is unrestricted and permissionless on the vast majority, possibly all, of the Ergo mining pools.

Meaning miners don’t have to create an account to participate in a mining pool, there are no KYC requirements, and IP addresses can easily be rerouted through a VPN proxy.

This idea would require pool developers to build a way to label, authenticate, and track individual miners. I know IP tracking is used to change payout thresholds on many pools, but IP addresses can easily be changed/rerouted (VPN) or camouflaged (Tor), both of which are easily accessible to most people.

Even if such tracking were possible to enforce, it goes against the spirit of PoW mining and Ergo’s ethos, which emphasizes individual rights to privacy, with a special emphasis on financial privacy.

And even if you could restrict large changes in hashrate at the individual level (efficiently and without causing much controversy), it wouldn’t do anything to limit changes in hashrate during rare events like the mERGe, where the majority of the hashrate change comes from groups of miners flocking to or from Ergo.

I also think trying to control hashrate by restricting miners is the wrong way to do it, even if you could. The protocol needs to be able to adjust to external forces, not try to control external forces, which at their core are generally uncontrollable by nature.

You could look into bonded mining for a DAA that is somewhat similar. But instead of limiting hashrate increases by individuals, bonded mining penalizes miners who don’t remain loyal and committed to the coin.

Scholarly paper that introduced the concept of bonded mining: [1907.00302] Bonded Mining: Difficulty Adjustment by Miner Commitment

We present Bonded Mining, a proactive DAA that works by collecting hash rate commitments secured by bond from miners. The difficulty is set directly from the commitments and the bond is used to penalize miners who deviate from their commitment.
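A rough sketch of the quoted idea (my own drastic simplification with invented names, not the paper’s actual algorithm): difficulty is derived from the committed hashrates, and deviating from a commitment forfeits part of the bond.

```python
# Simplified bonded-mining sketch. All names and formulas here are my own
# illustration of the quoted abstract, not the paper's real scheme.
TARGET_BLOCK_TIME = 120.0  # seconds

def next_difficulty(committed_hashrates: list) -> float:
    """Set difficulty directly from the total committed hashrate (H/s)."""
    return sum(committed_hashrates) * TARGET_BLOCK_TIME

def bond_penalty(committed: float, observed: float, bond: float) -> float:
    """Forfeit a share of the bond proportional to the relative deviation."""
    deviation = abs(observed - committed) / committed
    return min(bond, bond * deviation)

# A miner commits 100 MH/s with a 10 ERG bond but only delivers 60 MH/s:
print(bond_penalty(committed=100e6, observed=60e6, bond=10.0))  # 4.0
```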

Would it be worthwhile to consider an L2 with similarities to the Lightning Network before putting a lot of time and effort into a protocol change and a hard fork?

It seems to me that, rather than a knee-jerk protocol change in reaction to difficulty, a Lightning-style L2 could be introduced instead. If such an L2 were built as a sidechain and required some standard of mining to ensure its security, it could batch transactions before moving them to the Ergo chain, which would make the financial side of things make sense for miners.

I haven’t really seen anyone looking at a sidechain or L2 solution for the Merge migration of hashrate and the problems it has exposed. It seems we could at least look at that to satisfy miners; it should be part of the discussion before a fork is considered, IMO. It would be good to examine all available options before going through a hard fork without considering them.

2 Likes

This is the entire issue right here, I believe. We aren’t referencing a target block time with our model, because we tried to improve on Bitcoin’s DAA model.

GPU-minable altcoins shouldn’t model their DAA on Bitcoin’s. Even if we hadn’t modeled after it, we shouldn’t compare our replacement DAA against Bitcoin’s. Bitcoin is its own beast with its own assumptions.

No other PoW chain is comparable to Bitcoin in this regard. Satoshi was very lucky, because that DAA model is a less-than-ideal choice for most PoW altcoins. I’m not saying it couldn’t work well enough, just that there are better options. And if you’re a Bitcoin fork or a GPU-dominated chain like Ergo, the Bitcoin DAA is a bad model to use as the benchmark.

Here is the basic pseudocode for well-documented alternative DAAs that have significantly lower error rates against popular attacks like selfish mining and general timestamp attacks. You can look at ASERT’s results on BCH and see that even it produces high block times in some cases, but nothing like our current situation.
Edit: BCH needs to handle significant hashrate fluctuations since it’s a fork of BTC sharing the same SHA-256 hashing scheme.

wtema DAA

next_target = prior_target / (IDEAL_BLOCK_TIME * alpha_recip)
next_target *= block_time + IDEAL_BLOCK_TIME * (alpha_recip - 1)

asert DAA

target_timestamp = (height + 1 - activation_height)*IDEAL_BLOCK_TIME
next_target = target_at_activation * 2^((last_timestamp − target_timestamp)/tau)

Both reference target times. If we don’t reference target times, we need some sort of failsafe: if a block isn’t solved within X seconds, difficulty decreases by X%.

DISCLAIMER: This is all just initial opinion based on reading some things, I couldn’t even tell you what the DAA math means even in the pseudocode versions I copied and pasted. I didn’t even know pseudocode was a word until 2 days ago. I’m just sharing my current view, my uneducated perspective formed over a few days, based on google search results. Remember I didn’t even know pseudocode was a word.

Please tell me where I’m wrong in my general thought process and/ or provide more links for me to read :grimacing:

2 Likes

It’s a very informative input, thank you.
However, could someone write some examples (with example values, so the reader can see which parameters are being discussed) of adjustments made using the wtema and asert DAAs? Thank you! And the mentioned failsafe sounds great; I think that’s really important for black-swan events.

1 Like

Please everybody, check out my comment on the Hard Fork wishlist thread. One thing I bring up (the foundational change activation period) is an area we can (and IMHO should) modify when changing the DAA with a hard fork.

2 Likes

This would be amazing! I would recommend this link to read about ASERT and its results on BCH. The graphs may help visualize the results, and there’s good commentary that even low-level plebs like myself can understand, at least to some degree.

1 Like

Hello all,

I’m sharing my quick thinking about the problem, just in case it can help or inspire a new, Ergo-specific way of managing things.
Forgive my bad English if it isn’t clean :hugs:

Let’s say the target block time is 2 minutes and we want a steady network.
Let’s say the PoW with Diff(X) means finding a word that, when added to the block, makes the hash of the block begin with X zeros (that’s the general idea; it can be finer-grained than powers of 2).

  1. rule 1: any block validated less than 1.5 minutes after its parent is invalid
  2. rule 2: after 1.5 minutes, a Diff(X) PoW is accepted
  3. rule 3: after 4 minutes, a Diff(X-1) PoW is accepted
  4. rule 4: any block competition for a given height needs at least partial consensus resolution and possibly gives feedback to adjust difficulty (many solution proposals early => difficulty is probably too low for the hashrate; many proposals after 4 minutes => probably not far off, etc.)
  5. rule 5: to sort conflicting propagated block solutions, the PoW (within adequate propagation time) with the most leading zeros (then the most trailing ones, or whatever deterministic secondary game based on the hash result) is preferred
  6. rule 6: once chosen, the block is timestamped, and the difference between the nominal average block time (2 minutes) and the measured time from its parent is added to sum_DeltaT in the block; likewise sum_DeltaErg compared to the nominal emission rate (optionally the number of temporary “uncles/orphans” too)
  7. rule 7: if DeltaT(validated/legit longest chain) > Threshold_too_slow(dist_current_height__legit_top) => diminish_diff(DeltaErg)
  8. rule 8: if DeltaT(validated/legit longest chain) < Threshold_too_fast(dist_current_height__legit_top) => raise_diff(DeltaErg)
  9. rule 9: the PoW difficulty should be set very slightly over/under depending on the sign of ergDelta (-/+), calculated to balance out over a very long time, ~2-3 months or even more
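A minimal sketch of rules 1-3 (the function name, the zero-count encoding, and the thresholds are my reading of the proposal, not an implementation): the minimum acceptable PoW quality relaxes as time since the parent block grows.

```python
# Sketch of rules 1-3 from the proposal above. Difficulty is represented
# here as a required count of leading zeros in the block hash; all names
# are invented for illustration.
def min_accepted_zeros(base_diff_zeros: int, seconds_since_parent: float):
    """Return the minimum leading-zero count a block hash must have,
    or None if the block is invalid (too soon after its parent)."""
    if seconds_since_parent < 90:       # rule 1: < 1.5 min -> invalid
        return None
    if seconds_since_parent < 240:      # rule 2: 1.5-4 min -> Diff(X)
        return base_diff_zeros
    return base_diff_zeros - 1          # rule 3: > 4 min -> Diff(X-1)

print(min_accepted_zeros(32, 60))    # None (invalid, too fast)
print(min_accepted_zeros(32, 120))   # 32
print(min_accepted_zeros(32, 300))   # 31
```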

On the miner/pool side, the goal will be to find the best possible Diff(X) candidate while keeping a Diff(X-1) solution if one is found… for at least 1.5 minutes… a miner can even stop once a solution is found, if happy with it, or keep searching for a better Diff(Y >= X).

On the node side, the tricky part is certainly handling/sorting “uncles/orphans” when difficulty is too low… the difficulty game should help with that: a primary necessary goal to validate if alone, but also a secondary quality game in case of competition to validate the block… the propagation event closes the current block game somehow.

Also, I don’t know if “uncles/orphans” are kept in the database after the longest chain is judged legitimate; if so, they may need to be cleaned up, because this could generate a lot of them, and that’s the main problem I see.

If it can work within consensus (Idk) then advantages would be:

  • even a large hashrate bump can’t gain more than ~33% undue revenue (not 3x or more, which would need to be compensated later by crippling the network)… you would hit 1.5 minutes / 2 minutes ~all the time… but 1.5 minutes is OK: waste is limited by not being too fast, and it leaves room for random fluctuations that average out to nominal in the long run.
    It would also make it possible to detect such a scenario.
    And this amount would be smoothed out later in a very diffuse way… the coin-mass delta to compensate would be at least an order of magnitude lower. We don’t care if block time is 2.1 minutes over 2-6 months, but we do care if it’s 10 minutes over a short period that keeps increasing and snowballing, with progressive capitulation of Ergo supporters while users suffer a terrible experience at the same time.
  • being time-constrained, as a miner you want to improve your solution proposal to increase your odds of winning the prize… so the miner raises his own virtual difficulty to be a better candidate while always keeping his best previous puzzle solution in memory (as a fallback or minimum valid solution). He keeps working to improve his revenue. Also, the quality of solutions relative to time-since-parent is a way to estimate the current hashrate.
  • also, if a large amount of hashrate leaves the network, there is a fallback with lower difficulty, so block time doesn’t inflate exponentially with supporter capitulation (which can snowball) and stays under control.
  • random deviations may be smoothed, etc., but block time will fluctuate within a decent nominal range for user experience… overall the network may turn out more efficient than one whose blocks can be generated at any time… those may be nearly empty and impact the average tx bandwidth negatively… it’s kind of like letting the pint of beer fill up; a pint with nothing to drink is useless :smiley:
  • also, smoothing ergDelta over a very long period with only a slight instantaneous impact gives frequently hopping hashrate a chance to clean up its own mess later without even realizing it :smiley:, or drastically reduces the frequency of deliberate attacks.
    Long-term supporters are less impacted by having to fix a huge mess, and the price is less likely to fall.
    Nominal emission rates will be preserved in the long run, more consistent with fewer ups and downs.

I don’t know all the drawbacks, attack vectors, etc., but the current problem we are facing will stick to this network and others as well… at best it will cripple the network’s growth, in my opinion, because any excitement plus a price rise will provoke an attack as long as there is enough hashrate ready to switch on.
At worst, it will keep prices low enough that an attack isn’t worth it, or until it breaks the faith of good fellows/devs… sadly, without ETH, Ergo will be one of the primary targets.

There are other good projects, but they are ASIC-friendly, so Ergo is the ASIC-resistant one with the best value in small miners’ eyes.
It will be the first to be attacked if it gains value while showing low relative difficulty…
To grow freely, this problem has to be solved so that stable supporters are rewarded fairly, or at least suffer minimal pain and remain happy to support.
And IMHO it cannot be handled the same way BTC and the like do: the threat isn’t exactly the same size, nor the volatility, and the hashrate is more agile/diverse, etc.

That’s it for my own idea; who knows, if it can help, then I’m happy… in any case, I’ll have tried :laughing:

cheers guys

5 Likes

Wow, nice post! It must be even more tedious since English isn’t your preferred language. Thanks for taking the time to share your ideas!

I think the difficult part (and unfortunately the part that really matters) isn’t describing a better way to adjust the difficulty. The important part is translating those words into actual code: code that can be rigorously tested and then possibly implemented securely at the protocol level.

Several good ideas have been described so far in this thread. But ideas that can be simulated/tested (and therefore have a real chance to be implemented), are few and far between.

EIP 37 looks increasingly likely to have the support needed to pass the voting process.

https://votes.sigmaspace.io/

If so, EIP 37 can be expected to be the new method Ergo uses to adjust difficulty. That doesn’t mean this DAA topic is settled for Ergo though.

I really hope alternative DAAs (like the one you describe) are explored further in the coming months and years. I don’t mean to de-legitimize or unfairly dismiss the idea you’ve shared. Rather, I’d like to motivate you to expand on it with further research and work. So that one day, you can hopefully share workable code that we can test or even implement on chain.

Edit: I’m sorry, I shouldn’t have replied specifically to you with that last part. Everyone is presenting their idea, and yours is just as legit as anybody else’s in the thread. And you’re describing something similar to how I’ve seen ETH’s DAA described… I think?

3 Likes

I emailed one of the top experts on difficulty adjustment algorithms, zawy12, to get his opinion on EIP-37, and I want to give his response visibility without causing FUD. Read his comment here:
EIP-37: Tweaking Difficulty Adjustment Algorithm by kushti · Pull Request #79 · ergoplatform/eips · GitHub

Edit: I encourage everyone to read his newest comment:

His TL;DR changes noticeably over the course of that discussion. I personally sum it up with these two paragraphs:

TL;DR: you develop oscillations, just change difficulty every block with the current math. If you start suffering attacks with timestamp manipulation due to timestamps being allowed too far into the future or due to the 50% cap, read the following.

Your monotonic stamps prevents a lot of problems

And what I believe is a later update:

I saw your difficulty chart and it looks like the new method was implemented 4 days ago and it looks a lot better. It’s possibly very close to being like ASERT, WTEMA, and LWMA, except for the epochs causing a 128 block delay. I believe you could change difficulty every block and average block time would still be accurate. The reason to do the change is that delaying adjustment is what usually ends up in oscillations. Since it appears you didn’t have catastrophic oscillations with the older method of 1024 blocks per change (as I would have expected even before the merge), I think you’ll be OK with the current method. But if oscillations return, adjusting every block is the fix.

I encourage you to read his full comments; the above is a short excerpt of zawy12’s EIP-37 comments.

kushti’s comment + seeing the actual results of EIP 37 = what led zawy12 to post that :point_up:, which then led me to the point where I can honestly say:

While I still think there are better DAA models available, EIP 37 is a big improvement on the original least-squares method; everyone can look at the epoch monitor and recent blocks on the explorer to see that. I’m happy I’m not losing any sleep over it like I was a few days ago. Edit 2: Sorry, I don’t mean to sound dramatic; to be clear, my own ignorance is what led me to worry needlessly :blush:.

Please remember, I’m not great at math or an expert on any of this stuff. Just sharing my thoughts.

4 Likes