
Adds X11K algorithm #48

Open · wants to merge 275 commits into master
Conversation

@bedri bedri commented Nov 16, 2020

This pull request adds the X11K algorithm to cpuminer-multi. The X11K algorithm is currently used for Kyanite Coin mining.
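
For context, X-family algorithms in cpuminer-multi are typically built by chaining the sph_* hash primitives already present in the tree. The sketch below only illustrates that pattern with the first few X11 passes; the actual X11K round order, pass count and include paths are defined by the Kyanite specification and by the x11k code added in this pull request, so treat the chain shown here as illustrative, not authoritative.

    /* Illustrative only: the chained-hash pattern used by X-family algos
     * in cpuminer-multi, built from the sph_* primitives in the tree.
     * The real X11K order and pass count come from the Kyanite spec and
     * the x11k code added in this PR. */
    #include <string.h>
    #include <stdint.h>

    #include "sha3/sph_blake.h"
    #include "sha3/sph_bmw.h"
    #include "sha3/sph_groestl.h"

    static void x11k_like_hash(void *output, const void *input)
    {
        uint8_t hash[64];

        sph_blake512_context   ctx_blake;
        sph_bmw512_context     ctx_bmw;
        sph_groestl512_context ctx_groestl;

        /* first pass hashes the 80-byte block header */
        sph_blake512_init(&ctx_blake);
        sph_blake512(&ctx_blake, input, 80);
        sph_blake512_close(&ctx_blake, hash);

        /* each later pass hashes the 64-byte output of the previous one */
        sph_bmw512_init(&ctx_bmw);
        sph_bmw512(&ctx_bmw, hash, 64);
        sph_bmw512_close(&ctx_bmw, hash);

        sph_groestl512_init(&ctx_groestl);
        sph_groestl512(&ctx_groestl, hash, 64);
        sph_groestl512_close(&ctx_groestl, hash);

        /* ...skein, jh, keccak, luffa, cubehash, shavite, simd and echo
         * would follow the same pattern in a full X11-style chain... */

        memcpy(output, hash, 32);  /* the miner compares the first 256 bits */
    }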

tpruvot and others added 30 commits August 26, 2014 23:36
should be set to 14 for NEOS-blake and pentablake
also ensure blake context was initialised...
based on https://github.com/ghostlander/cpuminer-neoscrypt

with reduced changes in cpu-miner.c

Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
Fix some existing bugs:
 cryptonight hashrate log, and a lock-up when the stratum diff is missing

colors: enable colored output by default

and also trap signals on Windows (Ctrl+C)

Current state: much slower than on Linux (and x64 is almost twice the x86 speed)

Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
tested OK with curl-7.38.0 and openssl 1.0.1j

was really not easy to set up; SSL config:

 CROSS_COMPILE="x86_64-w64-mingw32-" ./Configure mingw64 no-asm no-shared
 make && make install

curl config:

 extraopts=--enable-ipv6
 extraflags="-DOPENSSL_NO_ASM -D_THREAD_SAFE"
 openssl=/usr/local/ssl
 CROSS_COMPILE="x86_64-w64-mingw32-" ./configure --enable-shared=no \
 --disable-manual --without-libssh2 --disable-rtsp --disable-ldap \
 --disable-dict --disable-pop3 --disable-ftp --disable-telnet --disable-tftp \
 --disable-smtp --disable-imap --disable-ldaps --disable-gopher --with-zlib \
 --with-ssl=$openssl --with-libssl-prefix=$openssl CPPFLAGS="$extraflags" ${extraopts}

Signed-off-by: Tanguy Pruvot
fix also some remaining aligned attributes for VC++

TODO: support ASM linkage in VC2013 (USE_ASM define)
and update icon branch in README
also add a Linux build script
also reduce x11 intensity to output a bit more while benchmarking
and update http headers
version seen: 1024 (Neos), 6 (Mist in PoW phase)
speed increased by 2x (same logic as I used in ccminer)

Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
curl static lib built with the HTTP_ONLY define

to build the x86 ones, check the curl-for-windows project on GitHub
tpruvot and others added 30 commits January 30, 2019 13:56
Signed-off-by: Tanguy Pruvot <tanguy.pruvot@gmail.com>
ARM boards don't build when enabling ASM, and some of them benefit from
using -march=native. Let's have a dedicated build file for this. It also
takes care of cleaning old autoconf/automake leftovers that can make the
build fail after pulling updates.
it's only needed on platforms that don't have a CRC32 instruction.
It is counterproductive to avoid writing 50% of the time: it adds
conditional jumps which are mispredicted half of the time. Better to use
conditional moves and always write. This increases performance by 6% on
ARMv8.
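
In other words, compute the value and store it unconditionally so the compiler can emit a conditional select (csel on ARMv8, cmov on x86) instead of a mispredicted branch. A minimal sketch with made-up names (the real change is in the rainforest/rf256 code):

    #include <stdint.h>

    /* Branchy version: skips the store half of the time, but the branch is
     * mispredicted about half of the time. */
    static inline void update_branchy(uint64_t *slot, uint64_t v, uint64_t mask)
    {
        if (v & mask)
            *slot = v;
    }

    /* Branchless version: always writes, selecting the old or new value so
     * the compiler can use csel/cmov, which has no misprediction penalty. */
    static inline void update_cmov(uint64_t *slot, uint64_t v, uint64_t mask)
    {
        uint64_t old = *slot;
        *slot = (v & mask) ? v : old;  /* compiled to a conditional move */
    }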
By moving some fields in the structure, we can increase the performance
by an extra 6% on Cortex-A53 at least.
Drop all hashes which will have one of their highest 16 bits set since
they will not match. This saves 4 calls to rf256_one_round() via
rf256_final() and almost doubles the performance.
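
A rough sketch of that kind of early reject, with hypothetical names (the real test lives in the rf256 scan path): if any of the 16 most significant bits of the partial result is already set, the hash cannot meet the target, so the remaining finalization rounds can be skipped.

    #include <stdint.h>

    /* Hypothetical illustration of the early-reject check described above.
     * In the real miner, the expensive work that gets skipped is done by
     * rf256_final() / rf256_one_round(). */
    static inline int worth_finalizing(uint64_t partial_high_word)
    {
        /* Any of the top 16 bits set => the final hash cannot be below
         * the target, so don't bother finishing it. */
        return (partial_high_word & 0xFFFF000000000000ULL) == 0;
    }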
It's really expensive to use memcpy() to copy 16 kB of data on some
small processors like ARM Cortex A53 which only have 64-bit data paths
from the L1 cache. This roughly consumes 2k cycles just for the copy.
"Perf top" shows that half of the time is spent in memcpy(), and given
that this exhausts the L1 cache, the rest of the operations must cause
a lot of thrashing.

Since there are few modifications applied to the rambox between two
consecutive calls, better keep a history of recent changes inside
the context itself. This doesn't cost much because the write bus
between the CPU and the L1 cache is 128 bit on A53 so we can afford
a few writes. Also, the typical number of updates is apparently between
16 and 32, so it makes sense to put an upper bound at 32 and keep the
memory footprint low.

The performance is roughly multiplied by 5 on A53 just by doing this;
the hash rate reaches about 14.4k/s on NanoPI-Neo4, or almost 10 times
the performance of the original code.
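
A minimal sketch of that undo-journal idea, with hypothetical names (the real history lives inside the rf256 context): record the few slots that were modified, then undo just those instead of memcpy()ing the full 16 kB rambox before every hash.

    #include <stdint.h>
    #include <stddef.h>

    #define RAMBOX_WORDS  (16384 / sizeof(uint64_t))
    #define MAX_CHANGES   32   /* typical per-hash update count is 16..32 */

    /* Hypothetical sketch of the undo-journal approach described above. */
    struct rambox_ctx {
        uint64_t rambox[RAMBOX_WORDS];
        uint32_t changed_idx[MAX_CHANGES];
        uint64_t changed_old[MAX_CHANGES];
        unsigned nb_changes;   /* > MAX_CHANGES means the journal overflowed */
    };

    /* Record the previous value before modifying a slot. */
    static inline void rambox_write(struct rambox_ctx *ctx, uint32_t idx, uint64_t v)
    {
        if (ctx->nb_changes < MAX_CHANGES) {
            ctx->changed_idx[ctx->nb_changes] = idx;
            ctx->changed_old[ctx->nb_changes] = ctx->rambox[idx];
        }
        ctx->nb_changes++;
        ctx->rambox[idx] = v;
    }

    /* Undo the recorded writes instead of copying the whole 16 kB template;
     * fall back to a full restore only if the journal overflowed. */
    static void rambox_restore(struct rambox_ctx *ctx, const uint64_t *template_box)
    {
        if (ctx->nb_changes <= MAX_CHANGES) {
            for (unsigned i = ctx->nb_changes; i-- > 0; )
                ctx->rambox[ctx->changed_idx[i]] = ctx->changed_old[i];
        } else {
            for (size_t i = 0; i < RAMBOX_WORDS; i++)   /* rare slow path */
                ctx->rambox[i] = template_box[i];
        }
        ctx->nb_changes = 0;
    }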
and fix modifier for arm64
Array of function pointers optimization
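
The commit title above refers to replacing a per-call switch with a table of function pointers indexed by algorithm id; a generic sketch follows, with illustrative names rather than the PR's actual table:

    /* Illustrative only: dispatching per-algorithm hash functions through an
     * array of function pointers indexed by algorithm id, instead of a switch
     * evaluated on every call. Names are made up for the example. */
    typedef void (*hash_fn_t)(void *output, const void *input);

    void x11hash(void *output, const void *input);
    void x11k_hash(void *output, const void *input);

    enum algo_id { ALGO_X11 = 0, ALGO_X11K, ALGO_MAX };

    static const hash_fn_t hash_table[ALGO_MAX] = {
        [ALGO_X11]  = x11hash,
        [ALGO_X11K] = x11k_hash,
    };

    static inline void scan_one(enum algo_id algo, void *out, const void *in)
    {
        hash_table[algo](out, in);   /* one indirect call, no branching chain */
    }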

Since you said it's OK, I am merging. It will be merged into the linux branch anyway.