
Ethereum on ARM. New Eth2.0 Raspberry pi 4 image for automatically joining Prylabs Onyx Eth2.0 testnet. Step-by-step guide for installing and activating a validator.

TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to join the Eth2.0 Onyx testnet.
The image takes care of all the necessary steps to join the Eth2.0 Onyx testnet [1], from setting up the environment and formatting the SSD disk to installing and running the Ethereum Eth1.0 and Eth2.0 clients, as well as starting the blockchain synchronization (for both the Geth Eth1.0 Goerli chain [2] and the Prysm [3] Eth2.0 Beacon Chain).
You will only need to create a validator account, send the deposit of 32 Goerli ETH to the Onyx contract and start the validator systemd service.


You will need an SSD to run the Ethereum clients (without an SSD drive there's absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
In both cases, avoid low-quality SSD disks, as the disk is a key component of your node and can drastically affect performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (in blue).
1.- Download the image:
SHA256 13bc7ac4de6e18093b99213511791b2a24b659727b22a8a8d44f583e73a507cc
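Before flashing, the download can be checked against this hash; a minimal sketch with a helper of our own (the filename in the usage comment is assumed from the dd example below):

```shell
# Compare a file's SHA256 against an expected value before flashing.
verify_image() {
    expected="$1"; file="$2"
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: checksum matches"
    else
        echo "FAIL: checksum mismatch (got $actual)" >&2
        return 1
    fi
}

# usage (filename assumed):
# verify_image 13bc7ac4de6e18093b99213511791b2a24b659727b22a8a8d44f583e73a507cc \
#     ubuntu-20.04-preinstalled-server-arm64+raspi.img.zip
```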
2.- Flash the image
Insert the microSD in your Desktop / Laptop and download the file:
Note: If you are not comfortable with command line or if you are running Windows, you can use Etcher [8]
Open a terminal and check your MicroSD device name running:
sudo fdisk -l 
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
unzip ubuntu-20.04-preinstalled-server-arm64+raspi.img.zip
sudo dd bs=1M if=ubuntu-20.04-preinstalled-server-arm64+raspi.img of=/dev/mmcblk0 conv=fdatasync status=progress
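dd will happily overwrite whatever device you point it at, so a small guard before flashing doesn't hurt. This sketch is our own helper (not part of the image); it just refuses device names that don't look like an SD card or USB disk:

```shell
# Refuse to proceed unless the target looks like an SD card or USB disk.
confirm_device() {
    case "$1" in
        /dev/mmcblk*|/dev/sd*) echo "target: $1" ;;
        *) echo "refusing: $1 does not look like a removable device" >&2; return 1 ;;
    esac
}
```

e.g. `confirm_device /dev/mmcblk0 && sudo dd bs=1M if=... of=/dev/mmcblk0 ...`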
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue USB 3.0 port).
4.- Power on the device
The Ubuntu OS will boot up in less than one minute, but you will need to wait approximately 7 minutes to allow the script to perform the necessary tasks to join the Onyx testnet (the device will reboot again).
5.- Log in
You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum
Password: ethereum
You will be prompted to change the password on first login, so you will need to log in twice.
6.- Forward 30303 and 13000 ports in your router (both UDP and TCP). If you don’t know how to do this, google “port forwarding” followed by your router model.
7.- Getting console output
You can see what’s happening in the background by typing:
sudo tail -f /var/log/syslog
8.- Grafana Dashboards
There are 2 Grafana dashboards to monitor the node (see the section "Grafana Dashboards" below).
See [9]

The Onyx Eth2.0 testnet

Onyx is an Eth2.0 testnet created by Prylabs according to the latest official Eth2.0 specification, the v0.12.1 release [10] (which is intended to be the final one).
In order to run an Onyx Eth 2.0 node you will need 3 components:
The image takes care of the Eth1.0 Geth and Eth2.0 Beacon Chain configurations and syncs. So, once flashed (and after a first reboot), Geth (Eth1.0 client) starts syncing the Goerli testnet and the Beacon Chain (Eth2.0 client) gets activated through the Prysm client, both as systemd services.
When the Goerli testnet sync is completed, the Beacon Chain starts syncing. Both chains are necessary as the validator needs to communicate with them (as explained below).
Activating the validator
Once Goerli and the Beacon Chain are in sync, you have just one task left: configure the validator to enable the staking process.
The image provides the Prysm validator client for running the staking process. With this validator, you will create an account with 2 keys (public and private) and get a hex string that needs to be sent to the Eth 1.0 blockchain as data in a 32 ETH transaction.
The Beacon Chain (which is connected to the Eth1 chain) will detect this deposit (which includes the validator public key) and the Validator will be activated.
So, let’s get started. Geth Goerli testnet and the Beacon Chain are already syncing in the background. Goerli will sync in about 1 hour and the Beacon Chain in about 2 hours (so this will take 3 hours overall).
The easiest way to enable a Prysm validator is to use the Prylabs web portal to get Goerli ETH (testnet ETH) and follow their instructions:
Let’s break this down:
Step 1) Get Prysm
Nothing to do here. Prysm is already installed.
Step 2) Get GöETH — Test ETH
We need 32 ETH to stake (it is fake ETH, as this is a testnet). Prylabs created a faucet with a great UI so you can easily get 32.5 Goerli ETH.
You will need a web3 provider to use the faucet. Install the Metamask browser extension (if you don't have it already). Create an account and set the network to "Goerli test network" (at the top of the Metamask screen). Now click "Metamask", then click the "Need GoETH?" button and confirm the transaction.
Once funded, you will see something like this:
You are 0x0b2eFdbFB8EcaF7F4eCF6853cbd5eaD86510d63C and you have 32.5 GöETH. 
Step 3) Generate a validator public / private key
Go to your Raspberry Pi console and run the following command (make sure you are logged in with your ethereum user):
validator accounts create 
Press return to confirm the default path
Enter a password twice (you will need it later to run the validator, so write it down and keep it safe). Once finished, your account will be created (under the /home/ethereum/.eth2validators directory) containing, among other info, your validator keys. Additionally, you will get the deposit data as follows (this is an example):
========================Deposit Data=======================
0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001202f06da05b7e399e151f05d910369779ddd5c4c577ed264fd17040a9931b5adf10000000000000000000000000000000000000000000000000000000000000030affc980d9b2c86d1fb1fa70fd95c56dae34efcaa7bf923e020ac8941519065ff70b6b5ba6644e654ba372473b6b5837100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000a494d8e641d82ea723bc2f83b40bfd7f752ff7143cf16e57ad6627e99f0e590000000000000000000000000000000000000000000000000000000000000060b69dd0e51e68ddf8b2f5ecbdb8112b23b46dc8c7c7a68185652884b162b8000464847308b165a33aa102a00199e9c0800f53c768376fd88a3ba5f11e6d2eb3b5f6a455b97b4abe953faa270ca6e187db9739e047bf6fd51e02ab49b4ba17d376
===================================================================
***Enter the above deposit data into step 3 on***
Copy this data (just the hexadecimal part, from 0x to the last number), go back to step 3 of the Prylabs website and paste it into the field "Your validator deposit data".
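Picking the hex out by hand over SSH is error-prone; a small sketch (our own helper, not shipped with the image) that pulls the first 0x-prefixed string out of the saved validator output:

```shell
# Print only the first 0x-prefixed hex string from stdin.
extract_deposit_data() {
    grep -o '0x[0-9a-fA-F]*' | head -n 1
}

# usage: extract_deposit_data < saved_validator_output.txt
```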
Step 4) Start your beacon chain & validator clients
The beacon chain is already running in the background, so let's configure the validator. Just edit the /etc/ethereum/prysm-validator.conf file and replace the "changeme" string with your password (you can use the nano or vim editor). Now run:
sudo systemctl enable prysm-validator && sudo systemctl start prysm-validator 
Check if everything went right by running:
sudo systemctl status prysm-validator 
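Right after starting, the service can take a moment to settle, so a one-off status check may race it. A generic polling helper (ours, not part of the image) can wrap the check, e.g. `retry 10 systemctl is-active --quiet prysm-validator`:

```shell
# Re-run a command up to N times, one second apart, until it succeeds.
retry() {
    n="$1"; shift
    i=0
    while [ "$i" -lt "$n" ]; do
        if "$@"; then
            echo "ok after $((i + 1)) attempt(s)"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "gave up after $n attempts" >&2
    return 1
}
```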
Step 5) Send a validator deposit
We are almost there. Just click the “Make deposit” button and confirm the transaction.
Now you need to wait for the validator to get activated. In time, the beacon chain will detect the 32 ETH deposit (which contains the validator public key) and the system will put your validator in the queue. These are the validator statuses that you will see during the activation process:

Grafana Dashboards

We configured 2 Grafana dashboards to let users monitor both Eth1.0 and Eth2.0 progress. To access the dashboards, just open your browser and type your Raspberry Pi's IP followed by port 3000:
http://replace_with_your_IP:3000
user: admin
passwd: ethereum
There are 3 dashboards available:
Lots of info here. You can see, for example, whether Geth is in sync by checking (in the Blockchain section) that Headers, Receipts and Blocks are aligned, or easily find the validator status.

What's next

We are planning a new release for a multi-testnet Eth2.0 network including the Prysm, Teku and Lighthouse clients (and hopefully Nimbus).

Gitcoin Grant

Gitcoin Grants round 6 is on! If you appreciate our work, please consider donating. Even $1 can make a difference!
Follow us on Twitter. We post regular updates and info you may be interested in!


    1. Installation script:
    2. Prysm Dashboard:
submitted by diglos76 to ethereum [link] [comments]

Proxmox containers not running after apt upgrade

I recently performed an apt upgrade and my LXC containers stopped working. When starting a container, no error message appears and the web UI responds with "Task OK", but the container doesn't actually start.
I also tried pct start 100, and no error message was displayed, but trying to pct enter 100 returns: Error: container '100' not running!
Not entirely sure which package caused it, but this is the apt/history.log:
# tail /var/log/apt/history.log
Start-Date: 2020-07-11 10:24:37
Commandline: apt upgrade
Install: pve-headers-5.4.44-2-pve:amd64 (5.4.44-2, automatic), proxmox-backup-client:amd64 (0.8.6-1, automatic), pve-kernel-5.4.44-2-pve:amd64 (5.4.44-2, automatic)
Upgrade: proxmox-widget-toolkit:amd64 (2.2-8, 2.2-9), pve-kernel-5.4:amd64 (6.2-3, 6.2-4), corosync:amd64 (3.0.3-pve1, 3.0.4-pve1), libavformat58:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), libcmap4:amd64 (3.0.3-pve1, 3.0.4-pve1), libavfilter7:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), libpve-access-control:amd64 (6.1-1, 6.1-2), libpve-storage-perl:amd64 (6.1-8, 6.2-3), libswresample3:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), libquorum5:amd64 (3.0.3-pve1, 3.0.4-pve1), pve-qemu-kvm:amd64 (5.0.0-4, 5.0.0-9), libmagickwand-6.q16-6:amd64 (8:, 8:, pve-container:amd64 (3.1-8, 3.1-10), libpostproc55:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), pve-manager:amd64 (6.2-6, 6.2-9), libvotequorum8:amd64 (3.0.3-pve1, 3.0.4-pve1), libpve-guest-common-perl:amd64 (3.0-10, 3.0-11), libavcodec58:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), libpve-common-perl:amd64 (6.1-3, 6.1-5), libavutil56:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), qemu-server:amd64 (6.2-3, 6.2-8), libcfg7:amd64 (3.0.3-pve1, 3.0.4-pve1), libproxmox-backup-qemu0:amd64 (0.1.6-1, 0.6.1-1), libswscale5:amd64 (7:4.1.4-1~deb10u1, 7:4.1.6-1~deb10u1), libknet1:amd64 (1.15-pve1, 1.16-pve1), libmagickcore-6.q16-6:amd64 (8:, 8:, pve-headers-5.4:amd64 (6.2-3, 6.2-4), pve-kernel-helper:amd64 (6.2-3, 6.2-4), libpve-http-server-perl:amd64 (3.0-5, 3.0-6), libcpg4:amd64 (3.0.3-pve1, 3.0.4-pve1), libcorosync-common4:amd64 (3.0.3-pve1, 3.0.4-pve1), imagemagick-6-common:amd64 (8:, 8:
End-Date: 2020-07-11 10:26:03
I tried lxc-start with logs instead, and got these messages:
# lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log
lxc-start: 100: lsm/apparmor.c: run_apparmor_parser: 892 Failed to run apparmor_parser on "/var/lib/lxc/100/apparmor/lxc-100_<-var-lib-lxc>": apparmor_parser: Unable to replace "lxc-100_". Profile doesn't conform to protocol
lxc-start: 100: lsm/apparmor.c: apparmor_prepare: 1064 Failed to load generated AppArmor profile
lxc-start: 100: start.c: lxc_init: 845 Failed to initialize LSM
lxc-start: 100: start.c: __lxc_start: 1903 Failed to initialize container "100"
lxc-start: 100: tools/lxc_start.c: main: 308 The container failed to start
lxc-start: 100: tools/lxc_start.c: main: 314 Additional information can be obtained by setting the --logfile and --logpriority options
# tail /tmp/lxc-100.log
lxc-start 100 20200712012140.203 ERROR start - start.c:lxc_init:845 - Failed to initialize LSM
lxc-start 100 20200712012140.203 ERROR start - start.c:__lxc_start:1903 - Failed to initialize container "100"
lxc-start 100 20200712012140.203 DEBUG conf - conf.c:idmaptool_on_path_and_privileged:2642 - The binary "/usr/bin/newuidmap" does have the setuid bit set
lxc-start 100 20200712012140.203 DEBUG conf - conf.c:idmaptool_on_path_and_privileged:2642 - The binary "/usr/bin/newgidmap" does have the setuid bit set
lxc-start 100 20200712012140.203 DEBUG conf - conf.c:lxc_map_ids:2710 - Functional newuidmap and newgidmap binary found
lxc-start 100 20200712012140.208 NOTICE utils - utils.c:lxc_setgroups:1366 - Dropped additional groups
lxc-start 100 20200712012140.208 INFO conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxc/hooks/lxc-pve-poststop-hook" for container "100", config section "lxc"
lxc-start 100 20200712012140.893 INFO conf - conf.c:run_script_argv:340 - Executing script "/usr/share/lxcfs/lxc.reboot.hook" for container "100", config section "lxc"
lxc-start 100 20200712012141.395 ERROR lxc_start - tools/lxc_start.c:main:308 - The container failed to start
lxc-start 100 20200712012141.395 ERROR lxc_start - tools/lxc_start.c:main:314 - Additional information can be obtained by setting the --logfile and --logpriority options
Trying to access the apparmor directory shows that it doesn't exist. Could the upgrade have deleted the directory?
# ls /var/lib/lxc/100/apparmor
ls: cannot access '/var/lib/lxc/100/apparmor': No such file or directory
# ls -l /var/lib/lxc/100/
total 8
-rw-r--r-- 1 root root  977 Jul 12 09:21 config
drwxr-xr-x 2 root root 4096 Jun 15  2019 rootfs
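To see whether only this container lost its directory or others are affected too, a quick sketch (the helper is ours; the base directory is passed in so it can be pointed at /var/lib/lxc):

```shell
# Report containers under $1 that have no apparmor/ subdirectory.
check_apparmor_dirs() {
    for d in "$1"/*/; do
        [ -d "${d}apparmor" ] || echo "missing: ${d}"
    done
}

# usage: check_apparmor_dirs /var/lib/lxc
```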
My filesystem is ext4. Many of the issues I found regarding upgrade failures involve ZFS, but I don't use ZFS.
I'm not familiar enough with AppArmor to go any deeper, and I'm not entirely sure how to run lxc-start with the --logfile/--logpriority options directly either. Not sure what other logs/config files would be helpful in finding the issue, but here are a few more:
# pct config 100
arch: amd64
cores: 2
hostname: apache
memory: 512
nameserver:
net0: name=eth0,bridge=vmbr0,gw=,hwaddr=82:B1:0D:3C:47:68,ip=,ip6=dhcp,type=veth
onboot: 1
ostype: ubuntu
parent: upgrade
rootfs: local-lvm:vm-100-disk-0,size=20G
startup: order=1,up=30
swap: 1024
unprivileged: 1
# systemctl status pve-container@100
pve-container@100.service - PVE LXC Container: 100
   Loaded: loaded (/lib/systemd/system/pve-container@.service; static; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-07-12 09:27:47 +08; 16min ago
     Docs: man:lxc-start man:lxc man:pct
  Process: 30827 ExecStart=/usr/bin/lxc-start -F -n 100 (code=exited, status=1/FAILURE)
 Main PID: 30827 (code=exited, status=1/FAILURE)
Jul 12 09:27:44 alpha systemd[1]: Started PVE LXC Container: 100.
Jul 12 09:27:47 alpha systemd[1]: pve-container@100.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 09:27:47 alpha systemd[1]: pve-container@100.service: Failed with result 'exit-code'.
# journalctl -xe
-- The job identifier is 100128.
Jul 12 09:50:16 alpha systemd[1]: Started PVE LXC Container: 100.
-- Subject: A start job for unit pve-container@100.service has finished successfully
-- Defined-By: systemd
-- Support:
--
-- A start job for unit pve-container@100.service has finished successfully.
--
-- The job identifier is 100210.
Jul 12 09:50:16 alpha kernel: EXT4-fs (dm-13): mounted filesystem with ordered data mode. Opts: (null)
Jul 12 09:50:17 alpha audit[1534]: AVC apparmor="STATUS" info="failed to unpack end of profile" error=-71 profile="unconfined" name="lxc-100_" pid=1534 comm="apparmor_parser" name="lxc-100_" offset=151
Jul 12 09:50:17 alpha kernel: audit: type=1400 audit(1594518617.147:54): apparmor="STATUS" info="failed to unpack end of profile" error=-71 profile="unconfined" name="lxc-100_" pid=1534 comm="apparmor_parser" name="lxc-100_" offset=151
Jul 12 09:50:18 alpha systemd[1]: pve-container@100.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support:
--
-- An ExecStart= process belonging to unit pve-container@100.service has exited.
--
-- The process' exit code is 'exited' and its exit status is 1.
Jul 12 09:50:18 alpha systemd[1]: pve-container@100.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support:
--
-- The unit pve-container@100.service has entered the 'failed' state with result 'exit-code'.
submitted by NoOneLiv3 to Proxmox [link] [comments]

[Guide] Homebridge UniFi Cloudkey v1 (07/2020)

A small preface: after a lot of trial & error, I finally managed to install Homebridge + Config UI X on a UniFi Cloudkey V1. I have spent many hours of testing to get Homebridge running correctly. First I followed Ro3lie's guide, which was partly successful, but it installed NodeJS 10.x and the service (running Homebridge as a systemd service) was not working. NodeJS 10.x is not ideal for some Homebridge plugins (they need NodeJS 12.x), and because Homebridge was not running as a service, after a restart of the Cloudkey or a network issue you had to start it manually over SSH. I used Putty for the SSH connection and WinSCP to change some files, because I have/had almost no knowledge of NodeJS, coding, etc., so I used the combo of SSH and WinSCP.
This guide will install the following
Update Cloudkey Firmware and reset to factory defaults:
Uninstalling the UniFi Controller:
Changing the .list files:
Deb lines indicate where apt should download indexes of binary packages. We also need to change and delete some files. For this part I used WinSCP (an SFTP client), but if you have some more skills you can also do it from your SSH connection. If you want to do it with SSH, find the info in Ro3lie's guide.
deb buster main contrib non-free
deb-src buster main contrib non-free
deb buster/updates main contrib non-free
deb-src buster/updates main contrib non-free
deb buster-updates main contrib non-free
deb-src buster-updates main contrib non-free
Go to /etc/apt/sources.list.d/ and you will find 3 files there. Delete security.list and ubnt-unifi.list, and rename nodejs.list to nodesource.list. Open the file, delete all the text inside, paste the following and save the file:
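The same file shuffling can be done from the SSH session instead of an SFTP client; a sketch (our own helper) that takes the directory as an argument, so it can be run against /etc/apt/sources.list.d with sudo:

```shell
# Delete the two obsolete .list files and rename nodejs.list in place.
prune_sources() {
    dir="$1"
    rm -f "$dir/security.list" "$dir/ubnt-unifi.list"
    if [ -f "$dir/nodejs.list" ]; then
        mv "$dir/nodejs.list" "$dir/nodesource.list"
    fi
    ls "$dir"
}

# usage: sudo sh -c '... prune_sources /etc/apt/sources.list.d'
```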
deb stretch main
deb-src stretch main
Now run the following commands (from your SSH connection) and, after it's done, reboot the Cloudkey (run the command reboot from your SSH connection):
sudo apt-get update
sudo apt-get clean && sudo apt-get clean all && sudo apt-get autoclean && sudo apt-get update
Update Debian OS:
We first need to update to the newer Debian Buster 10.x; at this moment the Cloudkey is running Debian Jessie 8.x. Run sudo apt-get update && sudo apt-get upgrade. During the upgrade you may be asked what to do with the unattended-upgrades configuration file; choose 'Keep the local version currently installed'. When everything is done, we need to delete some files we no longer use. Run the following commands:
rm /etc/apt/apt.conf.d/50unattended-upgrades.ucf-dist
sudo apt-get remove freeradius
sudo apt-get purge freeradius
Update NodeJS 6.x to 12.x:
sudo apt update
sudo apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
curl -sL | sudo -E bash -
sudo apt -y install nodejs
To test whether you have successfully installed NodeJS 12.x and NPM 6.x.x, run the commands node -v and npm -v.
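If you want to script that check, a sketch (helper name is ours) that strips the leading v and extracts the major version:

```shell
# Extract the major version from strings like "v12.18.3".
node_major() {
    echo "$1" | sed 's/^v//' | cut -d. -f1
}

# e.g. [ "$(node_major "$(node -v)")" = "12" ] && echo "NodeJS 12.x detected"
```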
Install Homebridge + Config UI X and setup Homebridge as a service:
sudo npm install -g --unsafe-perm homebridge homebridge-config-ui-x
sudo hb-service install --user homebridge
At this point all the available files for Homebridge and the service are installed. Normally Homebridge would now be running as a service, but for some reason it doesn't, so we have to make some changes to get everything working. Use WinSCP and navigate to the file /etc/systemd/system/homebridge.service, delete all the text in it, paste the following and save:
[Unit]
Description=Node.js HomeKit Server

[Service]
Type=simple
User=homebridge
EnvironmentFile=/etc/default/homebridge
# Adapt this to your specific setup (could be /usr/bin/homebridge)
# See comments below for more information
ExecStart=/usr/bin/homebridge $HOMEBRIDGE_OPTS
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target
Now do the same for /etc/default/homebridge: delete the text and paste the following:
# Defaults / Configuration options for homebridge
# The following setting tells homebridge where to find the config.json
HOMEBRIDGE_OPTS=-U /var/lib/homebridge -I

# If you uncomment the following line, homebridge will log more
# You can display this via systemd's journalctl: journalctl -f -u homebridge
# DEBUG=*

# To enable web terminals via homebridge-config-ui-x uncomment the following line
HOMEBRIDGE_CONFIG_UI_TERMINAL=1

We need to make some user-rights changes and move the config file to the /var/lib folder. A few of these commands are not strictly needed and some will throw errors; just ignore that and run them all:
sudo mkdir /var/lib/homebridge
sudo useradd --system homebridge
sudo chown -R homebridge:homebridge /var/lib/homebridge
sudo chmod 777 -R /var/lib/homebridge
sudo cp .homebridge/config.json /var/lib/homebridge/config.json
Start Homebridge as a service (run the following commands):
systemctl daemon-reload
systemctl enable homebridge
systemctl start homebridge
Homebridge is now running as a service and you can log in to UI-X at your Cloudkey's local IP address on port 8581. If you have a backup from another system, you can just restore it at this point; after the restore is done, don't do anything else and follow the next steps.
Homebridge SUDO rights using Visudo:
The last part is very important: we have to give the user homebridge sudo rights. If you don't do this part correctly, you cannot update Homebridge, install packages or use the log viewer in UI-X, because Homebridge doesn't have the correct rights. We are going to use visudo, a safe way to edit the sudoers file.
That's it! If you have done everything correctly, you now have a working Homebridge with UI-X running as a service on your UniFi Cloudkey! If someone reads this guide and thinks some changes are needed, please let me know. Special thanks to Ro3lie for his original guide and Jeroen Van Dijk for the great support with visudo! You can find both original guides that inspired this tutorial here and here.
submitted by AverageUser1337 to homebridge [link] [comments]

Weekly Dev Update 23/03/2020

Hey Y’all,
Last week we released the 7.0.x Valiant Vidar binaries, which represent much of our work from the last three months. Valiant Vidar will add features like support for full onion routing in Session (instead of the current proxy requests system), sweeping Lokinet changes involving a complete rebuild of the DHT for increased reliability at scale, and the release of LNS for use in Session and the Loki wallets.
Loki Core
If you’re on our Discord you can catch Jeff, the lead developer of LLARP, live streaming as he codes. He typically streams on Tuesday mornings, 9am - 12pm Eastern (US) time.
What went on last week with Lokinet: Lokinet on mainnet continues to look good; as of the time of writing, there are 853 Service Node Lokinet routers online.
This has been a light week for Lokinet 0.8 development: one of the Lokinet developers took much of the week off for some much-needed rest and relaxation. Meanwhile, we’ve started work on a serious revamp of the configuration files; the current approach is difficult to use (even for Lokinet devs!), both in the code and in terms of describing all available options. By rewriting it, we’ll make it better documented and machine-editable (so that we can write tools to generate updated configuration sections), and we’ll also significantly simplify how options are handled in the code. We also started building Debian/Ubuntu packages for the graphical Lokinet control panel, along with some necessary changes for this Debian package to interact with the system-managed Lokinet service.
PR Activity:
Session iOS
Session Android
Session Desktop
Loki Storage Server
submitted by Keejef to LokiProject [link] [comments]

Full container/vm documentation available (unreleased but copied/pasted here)

Sourced from here but copied/pasted here for ease of viewing. It's not merged yet, but we will be able to find it there when finished.

Running Custom Containers Under Chrome OS

Welcome to the containers project where we support running arbitrary code inside
of VMs in Chrome OS.
This is a heavily-technical document, but more user-friendly information will be
coming in the future.
We won't get into technical details for specific projects as each one already
has relevant documentation.
We instead will link to them for further reading.


There are many codenames and technologies involved in this project, so hopefully
we can demystify things here.
Crostini is the umbrella term for making Linux application support easy to use
and integrating well with Chrome OS.
It largely focuses on getting you a Terminal with a container with easy access
to installing whatever developer-focused tools you might want.
It's the default first-party experience.
The Terminal app is the first entry point to that environment.
It's basically just crosh.
It takes care of kicking off everything else in the system that you'll interact with.
crosvm is a custom virtual machine monitor that takes care of managing KVM,
the guest VM, and facilitating the low-level (virtio-based) communication.
Termina is a VM image with a stripped-down Chrome OS linux kernel and
userland tools.
Its only goal is to boot up as quickly as possible and start running containers.
Many of the programs/tools are custom here.
In hindsight, we might not have named it one letter off from "Terminal", but so
it goes.
Maitred is our init and service/container manager inside of the VM, and is
responsible for communicating with concierge (which runs outside of the VM).
Concierge sends it requests and Maitred is responsible for carrying those requests out.
Garcon runs inside the container and provides integration with
Concierge/Chrome for more convenient/natural behavior.
For example, if the container wants to open a URL, Garcon takes care of
plumbing that request back out.
Sommelier is a Wayland proxy compositor that runs inside the container.
Sommelier provides seamless forwarding of contents, input events, clipboard
data, etc... between applications inside the container and Chrome.
Chrome does not run an X server or otherwise support the X protocol; it only
supports Wayland clients.
So Sommelier is also responsible for translating the X protocol inside the
container into the Wayland protocol that Chrome can understand.
You can launch crosh and use the vmc command to create new VMs manually.
It will only run Termina at this point in time.
You can use vsh to connect to a VM instance and use LXC to run containers.


Here's a quick rundown of how to get started.
If you're interested in Android Studio, check out their documentation.

Runtime Features

OK, so you've got your container going, but what exactly can you expect to work?

Missing Features

There's a lot of low-hanging fruit we're working on fleshing out.
There are more things we're thinking about, but we're being very
careful/cautious in rolling out features as we want to make sure we aren't
compromising overall system security in the process.
The (large) FAQ below should hopefully hit a lot of those topics.


While running arbitrary code is normally a security risk, we believe we've come
up with a runtime model that addresses this.
The VM is our security boundary, so everything inside of the VM is
considered untrusted.
Our current VM guest image is also running our hardened kernel to further
improve the security of the containers, but we consider this a nice feature
rather than relying on it for overall system security.
In this model, the rest of the Chrome OS system should remain protected from
arbitrary code (malicious or accidental) that runs inside of the containers
inside of the VM.
The only contact with the outside world is via crosvm, and each channel
talks to individual processes (each of which are heavily sandboxed).

User Data In The Container

With the shift to cloud services, current security thinking highlights the fact
that getting account credentials (e.g. your Google/Facebook passwords) is way
more interesting than attacking your desktop/laptop.
They are not wrong.
The current VM/container Chrome OS solution does not currently improve on this situation.
Put plainly, anything entered into the container is the responsibility of the
user currently.
So if you run an insecure/compromised container, and then type your passwords
into the container, they can be stolen even while the rest of the Chrome OS
system remains secure.


Linux apps do not survive logout (since they live in the user's encrypted storage).
They also do not automatically start at login (to avoid persistent attacks),
nor can they automatically run at boot (without a login session) since they
wouldn't be accessible (they're in the user's encrypted storage).


Once you've got the Terminal installed (which takes care of installing all
the other necessary components like Termina), the system is ready to use.
By virtue of having things installed, nothing starts running right away.
In that regard, when you log out, everything is shutdown and killed, and when
you login, nothing is automatically restarted.
When you run the Terminal, the Termina VM will be started automatically,
and the default Crostini container will be started in it.
You can now connect to the container via SSH or SFTP (via the Files app).
Similarly, if you run a Linux application directly (e.g. pinned to your shelf
or via the launcher), the Termina VM will be started automatically, and
the container that application belongs to will be launched.
There's no need to run Terminal manually in these situations.
When you close all visible applications, the VM/containers are not shut down.
If you want to manually stop them, you can do so via crosh and the vmc command.
Similarly, if you want to spawn independent VMs, or more containers, you can
do so via crosh and the vmc and vsh commands.

Device Support

While we would like to be able to bring this work to all Chromebooks, the kernel
and hardware features required limit where we can deploy this.
A lot of features we use had to be backported, and the further back we go, the
more difficult & risky it is to do so.
We don't want to compromise system stability and security here.

Supported Now

The initial platform is the Google Pixelbook (eve) running an Intel processor
(x86_64) with Linux 4.4.

Hardware Requirements

We are not planning on requiring a minimum amount of RAM, storage, or CPU speed,
but certainly the more you have of each of these, the better off things will be.
You will need a CPU that has hardware virtualization support.



Where can I chat with developers?

All Chromium OS development discussions happen in our
chromium-os-dev Google Group.
Feel free to ask anything!

Where can I file feature requests?

As a nascent project, we've got a lot on our plate and are focused on releasing,
so it'd be nice to hold off for now and check back in after a few Chrome OS releases.
Feel free to chat/ask on the mailing list above in the meantime.
Once we are in a more stable place, you can use our issue tracker.
See the next question for details.

Where can I file bugs?

Please first make sure you're using the latest dev channel.
A lot of work is still ongoing.
Next, please make sure the issue isn't already known or fixed.
You can check the existing bug list.
If you still want to send feedback, you can file a feedback report and include #crostini in the description.
Feedback about any part of Chrome OS can be filed with "Alt-Shift-i".
If you still want to file a bug with the developers, use this link to
route to the right people.

Can I boot another OS like Windows, macOS, Linux, *BSD, etc...?

Currently, no, you can only boot our custom Linux VM named Termina.
See also the next few questions.

Can I run my own VM/kernel?

Currently, no, you can only boot Termina which uses our custom Linux kernel
and configs.
Stay tuned!

Can I run a different Linux distro?

Of course!
The full LXD command line is available, and the included images remote has lots
of other distros to choose from.
However, we don't test with anything other than the default container that we
ship, so things may be broken when running another distro.
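For example, a different distro can be launched with the stock LXD tooling. The image alias below is only an illustration and may differ in the images: remote; treat this as a sketch rather than a supported recipe:

```shell
#!/bin/sh
# Hypothetical example: launch a Debian container alongside the default one.
# The alias "images:debian/11" is an assumption; list what the remote actually
# offers with `lxc image list images:` before picking one.
IMAGE="images:debian/11"
NAME="mydebian"
lxc launch "$IMAGE" "$NAME" || echo "lxc not available on this machine"
lxc exec "$NAME" -- cat /etc/os-release || true
```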

I'm running another distro, how do I get {gui apps, launcher icons, etc...}?

Sommelier and Garcon binaries are bind-mounted into every container, so no
need to install or cross-compile.
The systemd units and config files from cros-container-guest-tools will start
these daemons in a systemd user session.
It's also a good idea to run loginctl enable-linger to allow these to
remain running in the background.
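A minimal sketch of that setup inside the container (the sommelier unit name pattern is an assumption; check what cros-container-guest-tools actually installs):

```shell
#!/bin/sh
# Allow user-session daemons (Sommelier, Garcon) to keep running in the background.
TARGET_USER="${USER:-$(id -un)}"
loginctl enable-linger "$TARGET_USER" || echo "loginctl not available here"
# Inspect the user-session units once linger is enabled:
systemctl --user list-units 'sommelier*' || true
```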

Am I running Crostini?

If you're using the Terminal app, or programs in the default container we
provide that includes our programs to ease integration (e.g. Sommelier), then yes.
If you're running your own container or VM, then no.

How do I share files between Chrome OS & the container?

Using Secure Shell, you can set up a SFTP mount to the remote container and
then browse via the Files app.
Work is ongoing to automate this step by default.

Can I access files when the container isn't running?

Currently, the container must be running in order to access its content.

Can I install custom kernel modules?

Currently, no, Termina does not include module support.
That means trying to use software that requires building or loading custom
kernel modules (e.g. VirtualBox) will not work.
See the next question too.

Can I mount filesystems?

Currently, no (*).
The containers are implemented using Linux user namespaces and those are quite
restricted (by design).
We're looking into supporting FUSE though.
(*): Technically you can mount a few limited pseudo filesystems (like
memory-backed tmpfs), but most people aren't interested in those.

Can I run a VM inside the VM?

Currently, no, nested KVM is not supported.
You could run qemu-system to emulate the hardware and boot whatever OS you want
inside of that.
Unfortunately, it'll be quite slow as QEMU won't be able to utilize KVM for
hardware acceleration.

Can I run a container inside the container?

You'll probably need to install the relevant packages first for whatever
container format you want to run.

What container formats are supported?

Termina currently only supports LXC directly.
We're aware of Kubernetes/Docker/OCI/rkt/etc... and hope to make them all easy
to use.
See the previous question for a workaround in the mean time.

What architecture works on my system?

Since everything is all native code execution, it depends on the device you have.
If you don't know what device you have, you can find this out in two different ways.
If you see x86_64, you'll be able to run code compiled for Intel/AMD
(32-bit/64-bit/x32 should all work).
If you see arm (or something similar like armv7l) or aarch64, you'll be
able to run code compiled for ARM/ARM64.

Can I run other architectures?

There is currently no integrated support for running e.g. ARM code on an Intel
system, or vice-versa.
You could handle this yourself (e.g. by using qemu-user), but if you're familiar
with qemu-user, then you already knew that :).

How many VMs can I run?

You can spawn as many as your system can handle (RAM/CPU-wise).
They are all independent of each other.

How many containers can I run?

You can spawn as many as your system can handle (RAM/CPU-wise).
Each VM instance can host multiple containers.

Can I run programs that keep running after logout?

All VMs (and their containers) are tied to your login session.
As soon as you log out, all programs are shutdown/killed by design.
Since all your data lives in your encrypted home, we wouldn't want that to
possibly leak when you logout.
For more details, see the Security section in this doc.

Can I autorun programs when I login?

All VMs (and their containers) need to be manually relaunched.
This helps prevent persistent exploits.
For more details, see the Security section in this doc.

Can I autorun programs when I boot?

See the previous questions, and the Security section.

Are my VMs/containers/data synced/backed up?

Currently, no, nothing is synced or backed up.
You're responsible for any data going into the containers.
We hope to improve this situation greatly.

Can I use IPv6?

Unfortunately, only IPv4 is currently supported.
Yes, we're fully aware that everything should be IPv6-compatible in 2018.
We're working on it.

Can I access layer 2 networking?

Currently, no, networking access is only at layer 3 (i.e. IP).
So you won't be able to do any bridging or lower level fun stuff.
It's not clear if/when this will change.
Bridging with the outside world is difficult with WiFi, and not many devices
have Ethernet connections.
We could support layer 2 between containers, but it's not clear how many people
want this in order to justify the effort involved.

Can I access hardware (e.g. USB/Bluetooth/serial)?

Currently, no, but we are working on it.
Stay tuned!

Can I run graphical applications?

Yes, but currently things are unaccelerated.
So if you're looking to play the latest Quake game, it's not going to work well.
See the next few questions.

Can I run Wayland programs?

Yes, and in fact, these are preferred!
Chrome itself deals with Wayland clients heavily, and so you're much more
likely to have things "just work" if you upgrade.

Can I run X programs?

Yes, via our Sommelier helper.
We're still working out some compatibility kinks, and it probably will never be
as perfect as running an X server, but with the community moving to Wayland,
it should be good enough.

Why are windows sometimes tiny/fuzzy?

While Chrome supports high DPI displays, many Linux applications don't.
When a program doesn't properly support DPI scaling, poor results follow.
Currently we expose the native resolution and DPI directly to applications.
If they show up tiny or fuzzy, it's because they don't support scaling properly.
You should report these issues to the respective upstream projects so that,
hopefully someday, it'll "just work".
In the mean time, Sommelier exposes some runtime settings so you can set the
scale factor on a per-program basis to workaround the misbehavior.
Check out Sommelier's documentation for more details.
If you're applying a system wide zoom or otherwise changing the default display
resolution, we attempt to scale the application output to match.
This can lead to blurry results.
You can adjust the resolution of your display, or tweak things via Sommelier
(see above for more details).

Can I run Windows programs?

Sure, give WINE a try.
Compatibility will largely depend on WINE though, so please don't ask us for support.

Can I run Steam?

Sure, give Steam a shot.
Just remember that without accelerated graphics or sound, it's probably not
going to be too much fun.

Can I run macOS programs?

Probably not.
You could try various existing Linux solutions, but chances are good that they
are even rougher around the edges.

Can I develop Android apps (for ARC++)?

Check out the Android Studio site for more details on this.

Why implement crosvm from scratch (instead of using QEMU/kvmtool/etc...)?

We have nothing against any of these other projects.
In fact, they're all pretty great, and their designs influenced ours.
Most significantly, they did more than we needed and did not have as good a
security model as we were able to attain by writing our own.
While crosvm cannot do everything those other projects can, it does only what
we need it to.
For more details, check out the crosvm project.

Why run VMs? Aren't containers secure?

While containers often isolate themselves (via Linux namespaces), they do not
isolate the kernel or similar system resources.
That means it only takes a single bug in the kernel to fully exploit the system
and steal your data.
That isn't good enough for Chrome OS, hence we put everything inside a VM.
Now you have to exploit crosvm via its limited interactions with the guest,
and crosvm itself is heavily sandboxed.
For more details, see the Security section in this doc.

Don't Android apps (ARC++) run in a container and not a VM?

Unfortunately, yes, Android apps currently run only in a container.
We try to isolate them quite a bit (using namespaces, seccomp,
alt syscall, SELinux, etc...), but at the end of the day, they have direct
access to many syscalls and kernel interfaces, so a bug in there is reachable
via code compiled with Android's NDK.

If Android apps are in a container, why can't users run code too?

We don't usually accept a low security bar in one place as a valid reason to
lower the security bar everywhere.
Instead, we want to constantly raise the security bar for all code.

Are Android apps (ARC++) going away?

There are no plans to merge the two projects.
We share/re-use a lot of the Chrome bridge code though, so it's not like we're
doing everything from scratch.

Don't VMs slow everything down?

It is certainly true that VMs add overhead when compared to running in only
a container or directly in the system.
However, in our tests, the overhead is negligible to the user experience, and
well worth the strong gains in system security.
For more details, see the Security section in this doc.

Why run containers inside the VM? Why not run programs directly in the VM?

In order to keep VM startup times low, we need Termina to be as slim as possible.
That means cutting out programs/files we don't need or care about.
We use SquashFS to make the image smaller and faster to load, but it means
the image/root filesystem is always read-only.
Further, the versions of programs/libraries we ship are frequently newer than
other distros (since we build off of Gentoo), and are compiled with extra
security flags.
It would also make it more difficult to have a stateless image that always
worked and would be immune from user mistakes.
Altogether, it's difficult to support running arbitrary programs, and ends
up being undesirable.
Forcing everything into a container produces a more robust solution, and
allows users to freely experiment without worry.
Also, we love turtles.

Can I disable these features?

Administrators can control access to containers/VMs via the management
console, so enterprise/education organizations that want to limit this can.
Initially there is a "Linux (Beta)" option under the standard Chrome OS
settings, but the long-term plan is to remove this knob so things work
out of the box. At that point, there will be no knob for unmanaged devices.
submitted by -nbsp- to Crostini

Creating a Headless Staking Node on Ubuntu 18.04

Creating a Headless Staking Node on Ubuntu 18.04
##UPDATE## Step 8 - Option 2 has some bugs in the final build process. I haven't had time to work them out yet!

This guide will take you through building and running a headless x42 Full Node! The OS I am using here is Ubuntu 18.04, and this guide picks up from a complete/fresh Ubuntu install.
This is meant to set up a staking node, so this guide will run you through building, configuring and setting up staking. It will not cover sending transactions or anything else.
The things we are going to do:
  • Step 1 - Install .net core
  • Step 2 - Download The x42 Node Source & Compile It
  • Step 3 - Setting The x42 Node Up To Run On Boot
  • Step 4 - Setup A New Wallet
  • Step 5 - Configure The x42 Daemon
  • Step 6 - Get Address
  • Step 7 - Check Balance
  • Step 8 - Connect The UI Wallet To A Headless Node
  • Step 8 - [Option 1 - Use Installer] Connect The UI Wallet To A Headless Node
  • Step 8 - [Option 2 - Build/Compile UI Only] Connect The UI Wallet To A Headless Node # BROKEN#

Step 1 - Install .net Core

Here is the reference link:
Register Microsoft's keys & install their repos:
cd /tmp
wget -q
sudo dpkg -i packages-microsoft-prod.deb
sudo add-apt-repository universe
sudo apt -y install apt-transport-https
sudo apt update
sudo apt -y install dotnet-sdk-2.2
Microsoft collects telemetry data by default; if you are part of the “tin foil hat brigade” you can set the following environment variable to turn it off:
echo "DOTNET_CLI_TELEMETRY_OPTOUT=1" >> /etc/environment 
Now you should be at a point where .NET Core is installed on your system… that wasn’t so hard, was it! You can check by running the following command:
The output should look like this:
$ dotnet --list-sdks
2.2.103 [/usr/share/dotnet/sdk]

Step 2 - Download & Compile The x42 Node

This part assumes you have GIT installed, if not:
apt -y install git 
Now to pull down the source and compile it!
cd ~/
git clone
# "cd" into the source folder
cd X42-FullNode/src/
Now, .NET Core uses NuGet for package management; before we compile, we need to pull down all of the required packages. It's as simple as running the following (this will take a couple of minutes) inside of “X42-FullNode/src/”:
dotnet restore 
now we are ready to compile the source, execute (inside of “X42-FullNode/src/”):
dotnet build --configuration Release 
Ignore the yellow warnings, this is just the Roslyn compiler having a grumble… if you get red ones then something went wrong! The “--configuration Release” flag strips out all debug symbols and slims things down a little; this optional parameter is not mandatory.
Once this is done everything is built/compiled; you can run the daemon directly from the repository by going to:
cd ~/X42-FullNode/src/x42.x42D/bin/Release/netcoreapp2.1
dotnet x42.x42D.dll
This will kick off the node; however, if you exit SSH at this point it will kill the process! I always recommend copying the binaries out to a separate folder, which can be done with the following:
mkdir ~/x42node
mv ~/X42-FullNode/src/x42.x42D/bin/Release/netcoreapp2.1/*.* ~/x42node/
Now we have everything we need to run the node outside the git repository! What we need to do now is run the node and have it create the default x42.conf file:
cd ~/x42node
dotnet x42.x42D.dll
Feel free to hit “CTRL + C” to exit the application after a couple of seconds; by then the folders/files will have been created at the following path:

Step 3 - Setting The x42 Node Up To Run on Boot

Now we are going to create a service file so our x42 node automatically starts when the system is rebooted.
THINGS TO NOTE ABOUT BELOW: CHANGE THE ##USER## to the username you are currently using, as these files are within your home directory!
We need to drop to root for this..
sudo -i
cat << EOF > /etc/systemd/system/x42node.service
[Unit]
Description=x42 Node

[Service]
WorkingDirectory=/home/##USER##/x42node
ExecStart=/usr/bin/dotnet /home/##USER##/x42node/x42.x42D.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
SyslogIdentifier=x42node
User=##USER##
Environment=ASPNETCORE_ENVIRONMENT=Development

[Install]
WantedBy=multi-user.target
EOF
To enable the service, run the following (as the root user):
systemctl enable x42node.service 
BOOM… the node isn't running yet, but next time the system restarts it will automatically run!
Now let's exit out of root!
We can now start the node up and begin downloading blocks, by running the following command:
sudo systemctl start x42node.service 
If you want to check it's loaded and see some of the output, you can run:
sudo systemctl status x42node.service 
an example of the output:
$ sudo systemctl status x42node.service
● x42node.service - x42 Node
   Loaded: loaded (/etc/systemd/system/x42node.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-01-24 15:47:55 UTC; 14s ago
 Main PID: 5456 (dotnet)
    Tasks: 23 (limit: 1112)
   CGroup: /system.slice/x42node.service
           └─5456 /usr/bin/dotnet /home/darthnoodle/x42node/x42.x42D.dll

Jan 24 15:48:09 x42staking x42node[5456]: Batch Size: 0 Mb (0 headers)
Jan 24 15:48:09 x42staking x42node[5456]: Cache Size: 0/50 MB
Jan 24 15:48:09 x42staking x42node[5456]:
Jan 24 15:48:09 x42staking x42node[5456]: =======Mempool=======
Jan 24 15:48:09 x42staking x42node[5456]: MempoolSize: 0 DynamicSize: 0 kb OrphanSize: 0
Jan 24 15:48:09 x42staking x42node[5456]:
Jan 24 15:48:09 x42staking x42node[5456]: info: Stratis.Bitcoin.Connection.ConnectionManagerBehavior[0]
Jan 24 15:48:09 x42staking x42node[5456]: Peer '[::ffff:]:52342' connected (outbound), agent 'x42:1.2.13 (70012)', height 213920
Jan 24 15:48:09 x42staking x42node[5456]: info: Stratis.Bitcoin.Connection.ConnectionManagerBehavior[0]
Jan 24 15:48:09 x42staking x42node[5456]: Peer '[::ffff:]:52342' offline, reason: 'Receiving cancelled.'.

All node screen output can be found in the /var/log/syslog file.

Step 4 - Setup a New Wallet

With the Node running, we now need to setup and/or restore a wallet!
Everything will be performed through the APIs; however, by default these APIs listen on localhost only, so if you are connecting remotely this is a problem since you can't hit that IP. The solution: an SSH TUNNEL!
Execute the following command on your local system:
ssh -L 42220:localhost:42220 @ 
This binds local port 42220 (on your system) to port 42220 on the remote system; once you have executed the command you can type the following address in your laptop/desktop's web browser and be able to access the APIs:
It should look something like this:
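Before opening the browser, the tunnel can be sanity-checked from the command line. The swagger path below is an assumption based on the API pages used in the rest of this guide:

```shell
#!/bin/sh
# With the tunnel up in another terminal, this should print an HTTP status code.
PORT=42220
SWAGGER_URL="http://localhost:${PORT}/swagger/"
curl -s -o /dev/null -w "%{http_code}\n" "$SWAGGER_URL" || echo "tunnel not up yet"
```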
To create a new wallet, we first have to generate some mnemonic words (i.e. the seed); you can do that by going to the following API:
Hit the “Try it out” button which then prompts you for 2 fields:
Enter “English” and I would recommend 24 words, as this greatly increases the seed strength! Once that is done, hit Execute and scroll down to see the “Response Body”; this should contain the mnemonic which you are going to use to create the wallet. It looks something like below:
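If you prefer the command line over the Swagger page, the same mnemonic call can be made with curl through the tunnel. The route and parameter names here are assumptions based on the Stratis-style API this node exposes; verify them against your node's Swagger page:

```shell
#!/bin/sh
# Assumed route: GET /api/wallet/mnemonic — check Swagger before relying on it.
API_BASE="http://localhost:42220/api"
MNEMONIC_URL="${API_BASE}/wallet/mnemonic?language=English&wordCount=24"
curl -s -H "accept: application/json" "$MNEMONIC_URL" || echo "node not reachable"
```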
So now we have our mnemonic, it's time to generate the wallet; for this we need to use the API:
There are a number of parameters which are required in order to create a wallet:
WalletCreationRequest {
  mnemonic    string
  password*   string
  passphrase* string
  name*       string
}
It should be noted that the password and mnemonic are the most important parts of this request: the “password” will encrypt the wallet and is required to unlock it.
  • Hit the “Try it out” button
  • input the necessary data
  • Insert the mnemonic
  • Put a password & passphrase
  • “Name” is what your wallet will be called
It should look something like the following:
Hit “Execute”; the “Loading” sign may spin for a few minutes while the wallet is created. Once the wallet has been created, the “Response Body” will return the mnemonic you have just used. We now have a wallet!!
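The same creation request can be scripted. This is a hedged sketch: the route and JSON field names are taken from the request shape above (double-check them in Swagger), and the values are placeholders you must replace:

```shell
#!/bin/sh
# Placeholder values below — substitute your own mnemonic and secrets.
API_BASE="http://localhost:42220/api"
BODY='{"mnemonic":"<your 24 words>","password":"password123","passphrase":"my passphrase","name":"TestWallet"}'
curl -s -X POST "${API_BASE}/wallet/create" \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "node not reachable"
```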
We will now jump back out and configure the node to automatically load the wallet and start staking when it first loads.

Step 5 - Configure The x42 Daemon

Now we are going to modify the x42.conf file in order to automatically load our wallet and start staking 😊
First things first, lets stop our node by running the following command:
sudo systemctl stop x42node.service 
CD to the following folder and view its contents:
cd ~/.x42node/x42/x42Main
ls -lah
within that folder there should be 2 files you are interested in:
-rw-r--r-- 1 darthnoodle darthnoodle  18K Jan 28 16:01 TestWallet.wallet.json
-rw-rw-r-- 1 darthnoodle darthnoodle 3.1K Jan 24 15:25 x42.conf
So TestWallet.wallet.json is our physical wallet that will be loaded, but right now we want to modify the x42.conf file. Fire up your favourite text editor (if you use vi you're a masochist):
nano x42.conf 
The area we are interested in is the following:
####Miner Settings####
#Enable POW mining.
#mine=0
#Enable POS.
#stake=0
#The address to use for mining (empty string to select an address from the wallet).
#mineaddress=
#The wallet name to use when staking.
#walletname=
#Password to unlock the wallet.
#walletpassword=
#Maximum block size (in bytes) for the miner to generate.
#blockmaxsize=1000000
#Maximum block weight (in weight units) for the miner to generate.
#blockmaxweight=1000000
#Enable splitting coins when staking.
#enablecoinstakesplitting=1
#Minimum size of the coins considered for staking, in satoshis.
#minimumstakingcoinvalue=10000000
#Targeted minimum value of staking coins after splitting, in satoshis.
#minimumsplitcoinvalue=10000000000
Uncomment (remove the #) of the following lines and change their value:
stake=1 (changed to 1)
walletname=TestWallet (changed to our wallet name)
walletpassword=password123 (changed to the wallet password)
Save the file and exit back to the command prompt; now we shall restart the node with the following command:
sudo systemctl start x42node.service 
now the wallet is automatically loaded and ready for action!
You can check its loaded by going back to the API and executing the following command:
Or execute the following command on the NODE:
curl -X GET "" -H "accept: application/json" 
Both will produce the same output; if you scroll to the bottom you should see something like this:
======Wallets====== TestWallet/account 0, Confirmed balance: 0.00000000 Unconfirmed balance: 0.00000000 
This means the wallet is loaded and ready for action!!

Step 6 - Get Addresses

The next thing you are probably going to want is a receive address, and to check the balance and TX history. So let's start with getting an address!
Go to the following API:
Fill in the Wallet name which is “TestWallet” (in this example) and “account 0” (which is the first/default account):
Hit execute and you should have an x42 address within the “Response Body”:
BOOM… ok now we can receive funds! 😊
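For reference, the address call can also be made over curl. The route below is an assumption from the Swagger layout, so confirm it on the API page (note the URL-encoded space in "account 0"):

```shell
#!/bin/sh
# Assumed route: GET /api/wallet/unusedaddress — verify against Swagger.
ADDRESS_URL="http://localhost:42220/api/wallet/unusedaddress?WalletName=TestWallet&AccountName=account%200"
curl -s -H "accept: application/json" "$ADDRESS_URL" || echo "node not reachable"
```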

Step 7 - Check TX History

Go to the API and the following call:
The 2 fields we are most concerned about are:
Input the name of the wallet and account you want to view the history of, then hit Execute. The other fields can be left blank. This will return a list of TXs that the wallet has received:
This should look like the following:
There is an easier way of doing this that doesn't require you to be connected to your node, especially if you're only interested in viewing your staking rewards… THE EXPLORER!
Access the following URL: 
This will allow you to easily see all TXs associated with this address; it should look something like below:
… and you're done! By this point your node should be running and staking, and you have an easy way to view transactions/rewards 😊

Step 8 - Connect The UI Wallet To A Headless Node

The UI utilises a combination of technologies; however, the important part is that the code attempts to access the x42 node API on the local system.
So you have 2 options here:
  1. Download the Wallet Installers
  2. Compile The UI Yourselves
Pick the option that best suits you given the pros/cons below:
Option 1 - Pro's/Cons
  • If you use the installer, it's quick and easy.
  • This also installs an x42 node on your system which runs when the UI loads.
  • If you don't set up an SSH tunnel before running the wallet, the local node will bind to the port and the tunnel won't work… you will be connecting to the local wallet!!
Option 2 - Pro's/Cons
  • You only run the UI; the x42 node is not installed.
  • You don't have a superfluous node running and downloading blocks on your local system.
  • Time Consuming
  • Have to download dependencies and manually compile the code

Pre-Requirement - Needed For Both Options!!
As previously mentioned, the UI attempts to access the APIs on the local system; however, our node isn't running there. IN ORDER TO GET IT WORKING YOU NEED TO HAVE AN SSH TUNNEL, AND THIS TUNNEL NEEDS TO REMAIN ACTIVE WHENEVER YOU WANT TO ACCESS THE WALLET.
this can be done by executing the following command:
ssh -L 42220:localhost:42220 @ 

Step 8 - [Option 1 - Use Installer] Connect The UI Wallet To A Headless Node

Download and install the UI/Wallet & Node from:

If you don't want to run a local node and just want the UI, execute the following commands (as an administrator):
cd C:\Program Files\x42 Core\resources\daemon\
ren x42.x42D.exe x42.x42D.exe.bak
The above is for Windows; if you are on *NIX, locate the daemon and rename it (I will update how/where to find it shortly).
Set up the SSH tunnel as outlined above, then execute the wallet; it will load, however you will see an exception:
Don't worry, this is just the wallet trying to start the x42 node, which we don't want. If all works according to plan, after you click "OK" you should be presented with the wallet UI and have the option to select which wallet you would like to load:
... DONE!

Step 8 - [Option 2 - Build/Compile UI Only] Connect The UI Wallet To A Headless Node # BROKEN #


Ok, this is the fun bit! We need to install the following dependencies. These instructions are written for a Windows system, but it should be easy enough to perform the same on a *NIX system.
Install Dependencies
In order to build the wallet UI, you need to install the following components:
  • git
  • NodeJS
  • Electron Builder
First thing you need to do is install git, so download and install the package:
Next you need to install NodeJS, download and install the package:
Next we need to install npx via the node package manager:
npm install npx --verbose 
Next we need to make sure we have the Visual Studio build tools and Python (2.7) installed; this can be done by executing the following (AS AN ADMINISTRATOR!):
npm install -g --production windows-build-tools 
This will install the necessary tools to build C#/C++ code, plus Python 2.7; this could take some time! When it's done you should have something like the following:

Build & Install - Windows
Navigate to a folder where you want to download the git repository and execute the following command:
git clone 
This will clone the repository into the folder; it will only clone the wallet, not the node source! Now let's CD into the folder and build the UI:
cd X42-FullNode-UI\FullNode.UI
npm install
This will download and install all dependencies (can take a while), at the end you should see something like..
Now, the stock UI has a number of third-party libraries which contain some vulnerabilities; being a security-conscious person, I've also run:
npm audit fix 
When this is done, we have fixed most of the package vulnerabilities 😊 We also get a complaint about the typescript library being too new for the version of Angular in use, so run the following command to install the additional dependency:
npm install typescript@">=2.4.2 <2.7.0" 
now its time to build the UI, execute the following:
npm run build:prod 
once complete you should see something like the following..
Next it's time to compile the Electron binary. It should be noted that the build/package process utilises AppVeyor, which is not installed, so if you attempt to build right now you will get the following error:
cannot expand pattern "${productName}-v${version}-setup-${os}-${env.arch}.${ext}": env arch is not defined. 
To fix this we need to modify the build file; this is a quick one-liner that can do it:
powershell -Command "(gc electron-builder.json) -replace 'env.arch', 'arch' | Out-File electron-builder.json" 
Essentially the offending line for Windows is..
"artifactName": "${productName}-v${version}-setup-${os}-${env.arch}.${ext}" 
The build cannot resolve “env.arch”, so the above one-liner replaces “env.arch” with “arch”, which works 😊
execute the following command:
npx electron-builder build --windows --x64 
At present I get the following error no matter what I do, and I've run out of time to go hunting about. If anyone has any ideas on how to fix it then please post here or message me on Discord:

Happy staking!

If you found this post helpful, then buy me a beer and send a donation to XQXeqrNFad2Uu7k3E9Dx5t4524fBsnEeSw
submitted by D4rthNoodle to x42

[Troubleshooting] lighttpd webserver not starting after upgrade attempt

Hey everyone, after I tried to upgrade Pi-hole the lighttpd service is not starting properly anymore. The other Pi-hole services are working properly and DNS resolution works well - it's just the web UI that is broken.

I gathered all the info that seems relevant to me below. Unfortunately, this is where my knowledge about Linux stalls. Could someone help me out anyway? Any help and info is very much appreciated.
I'm willing to learn and understand, and could really use someone to guide me through the required troubleshooting.

Pihole status
$ pihole status
  [✓] DNS service is running
  [✓] Pi-hole blocking is Enabled
I performed a Pi-hole repair/reconfigure already but this does not change anything.
$ pihole -r
  [✓] Root user check
  (Pi-hole ASCII-art logo)
  [i] Existing PHP installation detected : PHP version 7.0.33-0+deb9u3
  [i] Repair option selected
  [✓] Disk space check
  [✓] Update local cache of available packages
  [✓] Checking apt-get for upgraded packages... up to date!
  [i] Installer Dependency checks...
  [✓] Checking for apt-utils
  [✓] Checking for dialog
  [✓] Checking for debconf
  [✓] Checking for dhcpcd5
  [✓] Checking for git
  [✓] Checking for iproute2
  [✓] Checking for whiptail
  [i] Performing reconfiguration, skipping download of local repos
  [✓] Resetting repository within /etc/.pihole...
  [✓] Resetting repository within /var/www/html/admin...
  [i] Main Dependency checks...
  [✓] Checking for cron
  [✓] Checking for curl
  [✓] Checking for dnsutils
  [✓] Checking for iputils-ping
  [✓] Checking for lsof
  [✓] Checking for netcat
  [✓] Checking for psmisc
  [✓] Checking for sudo
  [✓] Checking for unzip
  [✓] Checking for wget
  [✓] Checking for idn2
  [✓] Checking for sqlite3
  [✓] Checking for libcap2-bin
  [✓] Checking for dns-root-data
  [✓] Checking for resolvconf
  [✓] Checking for libcap2
  [✓] Checking for lighttpd
  [✓] Checking for php7.0-common
  [✓] Checking for php7.0-cgi
  [✓] Checking for php7.0-sqlite3
  [✓] Enabling lighttpd service to start on reboot...
  [i] FTL Checks...
  [✓] Detected ARM-hf architecture (armv7+)
  [i] Checking for existing FTL binary...
  [i] Latest FTL Binary already installed (v4.2.3). Confirming Checksum...
  [i] Checksum correct. No need to download!
  [✓] Checking for user 'pihole'
  [✓] Installing scripts from /etc/.pihole
  [i] Installing configs from /etc/.pihole...
  [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
  [✓] Copying 01-pihole.conf to /etc/dnsmasq.d/01-pihole.conf
  [i] Installing blocking page...
  [✓] Creating directory for blocking page, and copying files
  [✗] Backing up index.lighttpd.html
      No default index.lighttpd.html file found... not backing up
  [✓] Installing sudoer file
  [✓] Installing latest Cron script
  [✓] Installing latest logrotate script
  [i] Backing up /etc/dnsmasq.conf to /etc/dnsmasq.conf.old
  [✓] man pages installed and database updated
  [i] Testing if systemd-resolved is enabled
  [i] Systemd-resolved is not enabled
  [i] Restarting lighttpd service..
Apparently the issue is somewhere within the lighttpd service itself.
When I try to restart the lighttpd service, the following shows:
$ sudo service lighttpd force-reload
Job for lighttpd.service failed because the control process exited with error code.
See "systemctl status lighttpd.service" and "journalctl -xe" for details.

When checking the status, this is the error message:
$ systemctl status lighttpd.service -l
● lighttpd.service - Lighttpd Daemon
   Loaded: loaded (/lib/systemd/system/lighttpd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2019-03-19 12:13:26 CET; 23s ago
  Process: 5294 ExecStartPre=/usr/sbin/lighttpd -tt -f /etc/lighttpd/lighttpd.conf (code=exited, status=255)

Mar 19 12:13:25 raspberrypi systemd[1]: lighttpd.service: Unit entered failed state.
Mar 19 12:13:25 raspberrypi systemd[1]: lighttpd.service: Failed with result 'exit-code'.
Mar 19 12:13:26 raspberrypi systemd[1]: lighttpd.service: Service hold-off time over, scheduling restart.
Mar 19 12:13:26 raspberrypi systemd[1]: Stopped Lighttpd Daemon.
Mar 19 12:13:26 raspberrypi systemd[1]: lighttpd.service: Start request repeated too quickly.
Mar 19 12:13:26 raspberrypi systemd[1]: Failed to start Lighttpd Daemon.
Mar 19 12:13:26 raspberrypi systemd[1]: lighttpd.service: Unit entered failed state.
Mar 19 12:13:26 raspberrypi systemd[1]: lighttpd.service: Failed with result 'exit-code'.

Additional info:
$ journalctl -xe
-- Unit lighttpd.service has failed.
-- The result is failed.
Mar 19 12:29:38 raspberrypi systemd[1]: lighttpd.service: Unit entered failed state.
Mar 19 12:29:38 raspberrypi systemd[1]: lighttpd.service: Failed with result 'exit-code'.
Mar 19 12:29:38 raspberrypi systemd[1]: lighttpd.service: Service hold-off time over, scheduling restart.
Mar 19 12:29:38 raspberrypi systemd[1]: Stopped Lighttpd Daemon.
Mar 19 12:29:38 raspberrypi systemd[1]: Starting Lighttpd Daemon...
Mar 19 12:29:38 raspberrypi lighttpd[5922]: Duplicate array-key '.php'
Mar 19 12:29:38 raspberrypi lighttpd[5922]: 2019-03-19 12:29:38: (configfile.c.1154) source: /etc/lighttpd/conf-enabled/15-fastcgi-php.conf line: 21 pos: 1 parser failed someh
Mar 19 12:29:38 raspberrypi lighttpd[5922]: 2019-03-19 12:29:38: (configfile.c.1154) source: find /etc/lighttpd/conf-enabled -name '*.conf' -a ! -name 'letsencrypt.conf' -prin
Mar 19 12:29:38 raspberrypi lighttpd[5922]: 2019-03-19 12:29:38: (configfile.c.1154) source: /etc/lighttpd/lighttpd.conf line: 56 pos: 1 parser failed somehow near here: (EOL)
Mar 19 12:29:38 raspberrypi systemd[1]: lighttpd.service: Control process exited, code=exited status=255
Mar 19 12:29:38 raspberrypi systemd[1]: Failed to start Lighttpd Daemon.
[the same start/fail cycle repeats at 12:29:39, until systemd gives up:]
Mar 19 12:29:39 raspberrypi systemd[1]: lighttpd.service: Start request repeated too quickly.
Mar 19 12:29:39 raspberrypi systemd[1]: Failed to start Lighttpd Daemon.
Mar 19 12:29:39 raspberrypi systemd[1]: lighttpd.service: Unit entered failed state.
Mar 19 12:29:39 raspberrypi systemd[1]: lighttpd.service: Failed with result 'exit-code'.
Mar 19 12:30:01 raspberrypi cron[319]: (*system*pihole) RELOAD (/etc/cron.d/pihole)
Mar 19 12:30:01 raspberrypi CRON[5940]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 19 12:30:01 raspberrypi CRON[5944]: (root) CMD ( PATH="$PATH:/usr/local/bin/" pihole updatechecker local)
Mar 19 12:30:01 raspberrypi CRON[5940]: pam_unix(cron:session): session closed for user root
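The repeated Duplicate array-key '.php' line points at /etc/lighttpd/conf-enabled/15-fastcgi-php.conf declaring the ".php" handler twice. As a hedged illustration (the file contents below are fabricated, not the actual config), such a duplicate can be spotted by counting declarations:

```shell
# Fabricated example reproducing the parser complaint: the ".php"
# key assigned twice within one fastcgi.server array declaration.
cat > /tmp/15-fastcgi-php.conf <<'EOF'
fastcgi.server = (
    ".php" => (( "socket" => "/run/php.sock" )),
    ".php" => (( "socket" => "/run/php7.sock" ))
)
EOF

# Count declarations of the ".php" key; more than one inside a single
# array is what lighttpd rejects as "Duplicate array-key '.php'".
grep -c '"\.php"' /tmp/15-fastcgi-php.conf
```

A count above 1 in the real file (often from the fastcgi-php conf being enabled twice) would be consistent with the error above.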

FTR: My system is:
Linux raspberrypi 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l GNU/Linux
$ lighttpd -v
lighttpd/1.4.45 (ssl) - a light and fast webserver
Build-Date: Jan 14 2017 21:07:19
I even reinstalled lighttpd via the pihole setup, but the issue remains.

submitted by banthonnet to r/pihole

Request for community assistance in distro/strata acquisition strategies

The high-level goal for the next release is to make Bedrock Linux easier to try out. There are two broad steps for this:
That latter item, sadly, cannot be done in a generalized fashion. We'll need some logic for each distro (or possibly each family of related distros) we're interested in. This adds up to a lot of time-consuming work. Luckily, this work is easily parallelizable across different people! Instead of further delaying the next release waiting for me to read up on a bunch of distros I don't know, or limiting the usefulness of the next release by skipping support for them, I thought it best to reach out to others for help here. Odds are good y'all know some distros better than I do.
Here's what I'm looking for:
  1. Some way to check if the distro supports the current machine's architecture (e.g. x86_64)
    • Presumably compare the supported options against uname -m, maybe after mapping it if it's in another format.
  2. Some way to list a distro's available releases, if that makes sense for the given distro.
    • If there's a way to filter it down to only currently supported releases, that would be ideal.
    • If the release has a number of names/aliases, all of them would be of value. This way a user can specify the name in any format and we'll grab it.
  3. Some way to indicate which release should be considered the default selected one if none is specified, if that makes sense for the given distro.
  4. Some way to get a list of supported mirrors.
  5. Given a distro, release, and mirror, some way to get the distro's "base" files into a specified directory.
  6. Whatever steps are necessary to set up the previously selected mirror for the package manager, if that makes sense for the distro.
  7. Whatever steps are necessary to update/upgrade the now on-disk files, in case the above step grabbed files which need updates.
  8. Whatever steps are necessary to set up the distro's locales, given the current locale.
  9. Any tweaks needed to make it work well with Bedrock Linux.
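Item 1, for instance, can be sketched in POSIX shell; the mapping table below is purely illustrative (real mappings differ per distro):

```shell
# Translate `uname -m` output into a distro's own architecture name.
# The table is a hypothetical example, not an exhaustive mapping.
distro_arch() {
    case "$1" in
        x86_64)  echo "amd64" ;;   # e.g. Debian's name for x86_64
        aarch64) echo "arm64" ;;
        armv7l)  echo "armhf" ;;
        *)       return 1 ;;       # distro doesn't support this machine
    esac
}

distro_arch "$(uname -m)" || echo "unsupported architecture"
```

A per-distro script would supply its own table and let the non-zero return signal "skip this distro on this machine".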
What makes this tricky are some constraints we'll need to use:
Some quick and dirty examples:
Arch Linux:
  1. Arch Linux only supports x86_64.
  2. Rolling release, no need to list releases.
  3. Rolling release, no need to determine default release.
  4. The official mirrors are available in a published list, which can be trivially downloaded and parsed.
  5. Use a bootstrap tarball provided in the various mirrors to set up an environment for pacstrap, then use pacstrap to acquire the files
    • Given a mirror, we can find an HTML index page at $MIRROR/iso/latest/ which contains a file of the form archlinux-bootstrap--x86_64.tar.gz. We can download and untar this to some temporary location.
    • Add the mirror to the temp location's /etc/pacman.d/mirrorlist
    • chroot to the temp location and run /usr/bin/pacman-key --init && /usr/bin/pacman-key --populate archlinux.
    • chroot to the temp location and run pacstrap
    • kill the gpg-agent the above steps spawn and remove temp location.
    • chroot to the stratum and run /usr/bin/pacman-key --init && /usr/bin/pacman-key --populate archlinux.
    • kill the gpg-agent the above step spawns
  6. Add the mirror to the stratum's /etc/pacman.d/mirrorlist
  7. pacman -Syu
  8. Append locale to stratum's /etc/locale.gen and run locale-gen.
  9. Comment out CheckSpace in /etc/pacman.conf, as Bedrock Linux's bind mounts confuse it. Include a comment explaining this in case users read that config.
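Step 6 above (adding the mirror to the stratum's mirrorlist) is simple enough to sketch; the STRATUM directory and MIRROR URL below are hypothetical stand-ins:

```shell
# Write the chosen mirror into the stratum's pacman mirrorlist.
# STRATUM and MIRROR are illustrative values for this sketch.
STRATUM=/tmp/stratum-demo
MIRROR="https://example-mirror.org/archlinux"

mkdir -p "$STRATUM/etc/pacman.d"
# $repo and $arch stay literal here: pacman expands them itself.
printf 'Server = %s/$repo/os/$arch\n' "$MIRROR" > "$STRATUM/etc/pacman.d/mirrorlist"
cat "$STRATUM/etc/pacman.d/mirrorlist"
```

Step 5's temp-location version is the same write, just targeted at the bootstrap directory instead of the stratum.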
Debian:
  1. Parse, map to uname -m values, compare against uname -m.
  2. Given a mirror, look at:
    • The codename and version fields in /dists/oldstable/Release
    • The codename and version fields in /dists/stable/Release
    • The codename and version fields in /dists/testing/Release
    • Unstable/Sid, no version number.
  3. Default release is stable from above.
  4. Parse
  5. Use busybox utilities to download the package list and calculate packages needed to run debootstrap. Download those, extract them, then use those to run debootstrap.
    • Download /dists//main/binary-/Packages.gz
    • Parse Packages.gz for debootstrap's dependencies.
      • Packages.gz is a relatively simple format. This is doable, if slow, in busybox shell/awk.
    • wget the dependencies from the mirror and extract them to temp location
      • Busybox can extract .deb files.
    • chroot to temp and debootstrap stratum
  6. Add lines to /etc/apt/sources.list as needed
  7. apt update && apt upgrade
  8. Install locales-all.
  9. None needed.
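Step 5's claim that the Packages index is parseable in busybox shell/awk can be sketched against a tiny fabricated index (the package stanzas below are made up for illustration):

```shell
# A miniature Packages index in Debian's RFC822-style stanza format.
cat > /tmp/Packages <<'EOF'
Package: debootstrap
Version: 1.0.100
Depends: wget, gzip
Filename: pool/main/d/debootstrap/debootstrap_1.0.100_all.deb

Package: wget
Version: 1.20
Filename: pool/main/w/wget/wget_1.20_amd64.deb
EOF

# Print the Depends field for one package: track the current stanza's
# Package name, emit Depends when it matches.
awk -v pkg=debootstrap '
    $1 == "Package:" { cur = $2 }
    $1 == "Depends:" && cur == pkg { sub(/^Depends: /, ""); print }
' /tmp/Packages
```

This prints "wget, gzip"; the real script would then recurse over those names and wget each Filename from the mirror.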
Ubuntu and Devuan will likely be very similar, but they'll need some specifics. Ubuntu won't have oldstable/stable/testing/sid, for example, and they'll both need different mirrors.
Void Linux:
  1. Download index page from mirror then look at filenames, compare against uname -m.
  2. Rolling release, no need to list releases.
  3. Rolling release, no need to determine default release.
  4. Parse
  5. Get static build of xbps package manager from special mirror. Use to bootstrap stratum.
  6. Not needed
  7. xbps-install -Syu
  8. Write locale to stratum's /etc/default/libc-locales and run xbps-reconfigure -f glibc-locales
  9. None needed.
I'm thinking of making void-musl a separate "distro" from void for the purposes of the UI here, unless someone has a better idea. It'll be almost identical under-the-hood, just it'll look at a slightly different mirror location.
One way to go about researching this is to look for instructions on setting up the distro in a chroot, or to bootstrap the distro. Many distros have documentation like this or this.
Don't feel obligated to actually fully script something up for these. Some of that effort may go to waste if someone comes up with another strategy, or if some code could be shared across multiple strata. Just enough for someone else to write up such a script should suffice for now. It would be good if you tried to follow the steps you're describing manually, though, just to make sure they do actually work and you're not missing something.
In addition to coming up with these items for distros I haven't covered and improving strategies for distros we already have, there's value in thinking of other things which could be useful that we might need per distro. Is there anything I'm forgetting which should be added to the per-distro list of information we need?
I know a lot of people have said they would be interested in contributing, but don't know enough low-level Linux nitty-gritty to code something up. This may be a good way to contribute that might be more accessible.
submitted by ParadigmComplex to r/bedrocklinux

The Tyranny of the Minimum Viable User

In addressing shortcomings of a major web browser recently, I tossed out a neologism for a neologistic age: Minimum viable user.
This describes the lowest-skilled user a product might feasibly accommodate, or if you're business-minded, profitably accommodate. The hazard being that such an MVU then drags down the experience for others, and in particular expert or experienced users. More to follow.
There are cases where reasonable accommodations should be considered, absolutely. Though how this ought to be done is also critical. And arbitrary exclusions for nonfunctional reasons -- the term for that is "discrimination", should you ask -- are right out.
Accessibility accommodations, in physical space and informational systems, are a key concern. I don't generally require these myself, but know many people who do, and have come to appreciate their concerns. I've also come to see both the increased imposition, and benefits, this offers by way of accommodating the needs.
It's often underappreciated how increased accessibility helps many, often all, users of a product or space. A classic instance would be pavement (or sidewalk) kerb cuts -- bringing the edge of a walkway to street level, rather than leaving a 10 cm ridge. This accommodates not just wheelchairs, but dollies, carts, wheeled luggage, and more. Benefits which materialised only after deployment, beyond the original intent.

Accessibility and Information Systems

For information systems -- say, webpages -- the accommodations which are most useful for perceptually-challenged users are also almost always beneficial to others: clear, high-contrast layouts. Lack of distracting screen elements. A highly semantic structure makes work easier for both screen-readers (text-to-speech) and automated parsing or classification of content. Clear typography doesn't fix all copy, but it makes bad copy all the more apparent. Again, positive externalities.
When we get to the point of process-oriented systems, the picture blurs. The fundamental problem is that an interface which doesn't match the complexity of the underlying task is always going to be unsatisfactory. Larry Wall has observed this with regard to the Perl programming language: complexity will out. In landscape design, the problem is evidenced by the term "desire path". A disagreement between use and design.[1]
At its heart, a desire path is the failure of a designer to correctly anticipate, or facilitate, the needs and desires of their users. Such paths reflect emergent practices or patterns, some constructive, some challenging the integrity of a system. Mastodon Tootstorms are an example of a positive creative accommodation. Mostly.
On other services, the lack of an ability to otherwise dismiss content frequently creates an overload of the spam or abuse reporting mechanism. G+ comes to mind. If a side-effect of reporting content is that it is removed from my view, and there is no other way to accomplish that goal, then the reporting feature becomes the "remove from visibility" function. I've ... had that conversation with Google for a number of years. Or is that a monologue...
Software programming is in many ways a story of side-effects and desire paths, as is the art of crafting system exploits. PHP seems particularly prone to this, though I can't find the character-generating hack I've in mind.
There's the question of when a system should or shouldn't be particularly complex. Light switches and water taps are a case in point. The first has operated as a simple binary, the second as a variable-rate flow control, and the basic functionality has remained essentially unchanged for a century or more. Until the Internet of Broken Shit that Spies on you wizkids got ahold of them.... And modulo some simple management interfaces: timers or centralised large-building controls.
Simple tasks benefit from simple controls.
Complex tasks ... also benefit from simple controls, but no simpler than the task at hand.
A good chef, for example, needs only a modicum of basic elements. A good knife. A reliable cooktop and oven. A sink. A cutting surface. Mixing bowls. Underappreciated: measuring equipment. Measuring spoons, cups, pitchers. A scale. Thermometer. Timers. The chef also may have call for some specific processing equipment: cutting, chopping, blending, grating, and mixing tools. Powering these increases throughput, but the essential controls remain simple. And some specialised tools, say, a frosting tube, but which generally share common characteristics: they're individually simple, do one thing, usually a basic transformation, and do it well.
The complexity of the process is in the chef, training, and practice.
The antithesis of this is "cooking gadgets" -- tools or appliances which are complicated, fussy, achieve a single and non-general result, or which integrate (or attempt to do so) a full process. This is the stuff that clutters counter space and drawers: useless kitchen gadgets. A category so egregious it defies even simple listing, though you're welcome to dig through search results.
If you can only use it on one recipe, it's bad mkay?

Appropriateness of Single-use Tools: Safety equipment

On single-use tools: if that single use is saving your life in conditions of readily forseeable peril, then it may well be worth having. Lifeboats. Seatbelts. First aid kit.
That gets down to a risk assessment and mitigation calculation problem though, which may be error-prone: over- and under-estimating risks, and/or the efficacy of mitigations. Pricing risk and risk-as-economic good is another long topic.

Lifts, Telephones, and Automobiles

There are times when you absolutely should be aiming for the minimum viable user. Anything that sees widespread shared public use, for example. I shouldn't have to read the user manual to figure out how to open the front door to your building. Automatic, sensor-driven doors would be an entirely MVU product.
I've mentioned lifts, automobiles, and telephones. Each is highly complex conceptually, two can maim or kill. All can be relatively safely used by most adults, even children. A large part of what makes lifts, automobiles, and telephones so generally usable is that the controls are very highly standardised. Mostly. The exceptions become newsworthy.
Telephones have deviated from this with expansion of mobile and even more complex landline devices. And the specific case of business-oriented office telephones has been for at least 30 years, a strong counterexample, worth considering.

Office Phone Systems

It takes me a year or more to figure out a new office phone system. If ever. A constant for 30 years. This wasn't the case as of the 1980s, when a standard POTS-based phone might have five buttons, and the smarts were in a PBX generally located within the building.
By the 1990s, though, "smart phones" were starting to appear. Rolm was one early vendor I recall. These had an increasing mix of features, not standardised either across or within vendor lines, but generally some mix of:
  1. Voicemail
  2. Call forwarding
  3. Call conferencing
  4. Lots of other random shit to inflate marketing brochures
Feature #4 was a major problem, but the underlying one was, and remains, I think, the mismatch of comms channels and cognitive capacities a phone represents: audio, physical, textual, and short-term working memory.
The physical interface of most phones -- and I'm referring to desk sets here -- is highly constrained. There's a keypad, generally 12 buttons (not even enough for the impoverished Roman alphabet, let alone more robust ones), possibly an additional set of function buttons, and a handset, plus some base. Cords.
More advanced phonesets have perfected the technology of including a display for text which is simultaneously unreadable under any lighting conditions or viewing angles, and incapable of providing useful information in any regard. This is another engineering accomplishment with a decades-long record.
Phones are relatively good for talking, but they are miserable for communication. This is reflected in millennials' disdain for making phone calls: millennials prefer text-based apps to voice comms, as do numerous tech early-adopters. I suspect the reason is both the state-maintenance and the fragility of phone-based communications.
I'm distinguishing talking -- a longer and wandering conversation with a friend -- and communicating -- the attempt to convey or obtain some specific task-oriented or process-oriented information. The salient difference is that the latter is very strongly goal oriented, the former, not so much. That is, a "simple" phone conversation is a complex interaction and translation between visual, textual, audio, physical, and memory systems. It's also conducted without the visual cues of face-to-face communications (as are all remote comms), for further fun and games. This usually makes conversations with someone you know well (for whom you can impute those cues) generally far more straightforward than with a stranger, especially for complex discussions.
The upshot is that while a telephone is reasonably simple to use in the basic case -- establish a voice connection with another device generally associated with a person or business -- it actually fails fairly profoundly in the surrounding task context for numerous reasons. Many of which boil down to an interface which is simultaneously oversimplified and poorly suited to the task at hand.
Smartphones, and software-based telephony systems in general, followed the business phone lead.
Mobile comms have generally compounded the poor usability of business phone systems as phones by significantly deteriorating audio quality and dynamics -- constraints of packet-switching, compression, additional relay hops, and speed-of-light delays have boosted noise and lag to the level of interfering with the general flow of conversation. This isn't particularly an interface failure as such (it's channel behaviour), but it encourages millennials' shift to text.
I'll save the question of how to fix voice comms for discussion.
The point I'm making is that even an apparently straightforward device and task, with a long engineering history, can find itself ill-matched to new circumstances.
There's also much path-dependence here. Lauren Weinstein on G+ enjoys digging up old AT&T engineering and marketing and/or propaganda newsreels describing development of the phone system: direct-dial, switching, 7-digit, area-code, long-distance, touch-tone. There were real and legitimate design, engineering, and use considerations put into each of these. It's not as if the systems were haphazardly put together. This still doesn't avoid the net result being a bit of a hash.
An appreciation of why Mr. Chesterton built his fence , and whether or not that rationale remains valid, is useful to keep in mind. As are path-dependencies, 2nd-system effects, and late-adopter advantages. Those building out interdependent networks after initial trial often have a significant advantage.
It's also interesting to consider what the operating environment of earlier phones was -- because it exceeded the device itself.
A business-use phone of, say, the 1970s, existed in a loosely-integrated environment comprising:
Critically: these components operated simultaneously and independently of the phone.
A modern business, software, or smartphone system may offer some, or even all, of these functions, but frequently:
The benefits are that they are generally cheaper, smaller, more portable, and create digital data which may be, if accessible to other tools, more flexible.
But enough of phones.

The Unix Philosophy

The Unix Philosophy reads: "Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
It offers a tremendous amount of mileage.
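A trivial illustration of that mileage (mine, not the essay's): three single-purpose tools, none aware of the others, composed over plain text:

```shell
# Count word frequencies: each tool does one thing; text is the glue.
printf 'b\na\nb\nc\n' | sort | uniq -c | sort -rn
```

The most frequent item ("b", seen twice) sorts to the top, with no program in the chain knowing what "frequency ranking" is.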

Exceptions to the Unix Philosophy: Complexity Hubs

I want to talk about the apparent exceptions to the Unix philosophy: shells, editors, email, init (and especially systemd), remote filesystems, udev, firewall rules, security generally, programming languages, GUIs.
Apparently, "exceptions to the Unix philosophy" is very nearly another neologism -- I find a single result in Google, to an essay by Michael O. Church. He adds two more items: IDEs (integrated developer environments), arguably an outgrowth of editors, and databases. Both are solid calls, and both tie directly into the theme I had in mind in the preceding toot.
These are all complexity hubs -- they are loci of either control or interfacing between and among other systems or complex domains:

The GUI Mess

This leaves us with GUIs, or more generally, the concept of the domain of graphics.
The complexity here is that graphics are not text. Or at the very least, transcend text. It is possible to use text to describe graphics, and there are tools which do this: Turtle. Some CAD systems. Scalable vector graphics (SVG). But to get philosophical: the description is not the thing. The end result is visual, and whilst it might be rule-derived, it transcends the rule itself.
One argument is that when you leave the domain of text, you leave the Unix philosophy behind. I think I'm OK with that as a starting premise. This means that visual, audio, mechanical, and other sensory outputs are fundamentally different from text, and that we need to keep in mind that text, whilst powerful, has its limits.
It's also to keep in mind, though, what the characteristics and limits of GUIs themselves are.
Neal Stephenson, "In the Beginning was the Command Line", again, offers one such: metaphor shear. Most especially where a GUI is used to represent computer system elements themselves, it's crucial to realise that the representation is not the thing itself -- map-territory confusion. In fact a GUI isn't so much a representation as a remapping of computer state.
Unix, the C programming language, and the bash shell all remain relatively close to machine state. In many cases, the basic Unix commands are wrappers around either C language structures (e.g., printf(1) and printf(3)), or report the content of basic data structures (e.g., stat(1) and stat(2)). Even where the concept is reshaped significantly, you can still generally find the underlying concept present. This may be more foreign for newbies, but as exposure to the system is gained, interface knowledge leverages to system knowledge.
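As a small concrete case of that wrapping (my example, not the essay's): printf(1) accepts the same format language as printf(3), so shell-level interface knowledge transfers directly to the C-level system knowledge beneath it:

```shell
# This format string means the same thing to printf(1) here and to
# printf(3) in a C program: left-justified string in an 8-column
# field, zero-padded decimal, hexadecimal.
printf '%-8s|%04d|%x\n' abc 42 255
# → abc     |0042|ff
```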
GUIs lose this: represented state has little coherence.
Some argue that not being tied to the mechanism is an advantage -- that this allows the interface designer a freedom to explore expressions independent of the underlying mechanism.
This is true.
But it gets to another set of limitations of GUIs:
Scripting has the effect of constraining, for better or worse, changes to interfaces because scripts have to be updated as features change. The consequence is that tools either don't change arguments, change them with exceedingly long advance warning, or failing either of those, are rapidly discarded by those who use them due to gratuitous interface changes. The result is a strong, occasionally stifling, consistency over time.
The limits on information density and on scaling or scrolling are another factor. A good GUI might offer the ability to expand or compress a view by a few times, but it takes a very creative approach to convey the orders of magnitude scales which, say, a physical library does. Data visualisation is its own specialty, and some are good at it.
The result is that most GUI interfaces are good for a dozen, perhaps a few dozens, objects.
Exceptions to this are telling. xkcd is on the money: this chart manages to show values from $1 to $2.39 quadrillion ($2.39 thousand million million) within the same visualisation, a span of 15 orders of magnitude, by using a form of logarithmic scaling. This is possible, but it is difficult to do usefully or elegantly.

GUIs, Efficiency, and Change

Change aversion and inherent limits to GUI productivity interact to create the final conflict for GUIs: the potential for interface efficiency is limited and change is disruptive, so you lose for trying. Jamie "jwz" Zawinski notes this:
Look, in the case of all other software, I believe strongly in "release early, release often". Hell, I damned near invented it. But I think history has proven that UI is different than software.
What jwz doesn't do is explain why this is, and I'm not aware of others who have.
This also shows up in the case of Apple, a company which puts a premium on design and UI, but which is exceedingly conservative in changing UI. The original Mac desktop stuck with its initial motif from 1984 until 2001: 17 years. Its successor has changed only incrementally from 2001 to 2017, very nearly as long. Even Apple realise: you don't fuck with the GUI.
This suggests an underlying failure of the Linux desktop effort isn't a failure to innovate, but rather far too much churn in the desktop.
My daily driver for 20 years has been Window Maker, itself a reimplementation of the 1989 NeXT desktop. Which is to say that a 30 year-old design works admirably. It's fast, stable, doesn't change unexpectedly with new releases or updates, and gets the fuck out of the way. It has a few customisations which tend to focus on function rather than form.

The Minimum Viable User GUI and Its Costs

Back to my starting premise: let's assume, with good reason, that the Minimum Viable User wants and needs a simple, largely pushbutton, heavily GUI, systems interface.
What does this cost us?
The answer is in the list of Unix Philosophy Violating Tasks:

Just Who is the Minimum Viable User?

A central question, and somewhat inexcusably buried at this point in my essay, is who is the Minimum Viable User? This could be the lowest level of system skills capable of using a device, which an OECD survey finds is abysmally bad. Over half the population, and over 2/3 in most surveyed industrialised countries, have poor, "below poor", or no computer skills at all.
I'm moving past this point quickly, but recommend very strongly reading Jakob Nielsen's commentary on this study, and the study itself: "Skills Matter: Further Results from the Survey of Adult Skills" (OECD, 2016). The state of typical user skills is exceedingly poor. If you're reading this essay, you're quite likely not among them, though if you are, the comment is simply meant without disparagement as a statement of fact: from high to low, the range of user computer skills is enormous, with the low end of the range very highly represented in the general population. People who, largely, otherwise function quite well in society: they have jobs, responsibilities, families.
This has profound implications for futures premised on any sort of general technical literacy. As William Ophuls writes in Plato's Revenge, social systems based on the premise that all the children are above average are doomed to failure.
The main thrust of this essay though is a different concern. Global information systems which are premised on a minimal-or-worse level of sophistication by all users also bodes poorly, though for different reasons: it hampers the capabilities of that small fraction -- 5-8% or less, and yes, quite probably far less -- of the population who can make highly productive use of such tools, by producing hardware and software which fails to support advanced usage.
It does this by two general modes:
The dynamics are also driven by market and business considerations -- where the money is, and how development, shipping, and maintaining devices relates to cash flows.

The Problem-Problem Problem

One business response is to extend the MVU definition to that of the Minimum Viable-Revenue User: services are targeted at those with the discretionary income, or lack of alternatives, to prove attractive to vendors.
There's been well-founded criticism of Silicon Valley startups which have lost track of what constitutes a meaningful problem in need of a solution. It's a problem problem. Or: the problem-problem problem.
Solving Minor Irritations of Rich People, or better, inventing MIoRP, as a bootstrapping method, has some arguable utility. Tesla Motors created a fun, but Very Expensive™, electrified Lotus on its way to creating a viable, practical, battery-powered, Everyman vehicle. Elon Musk is a man who has made me a liar multiple times, by doing what I unequivocally stated was impossible, and he impresses the hell out of me for it.
Amazon reinvented Sears, Roebuck, & Co. for the 21st century bootstrapped off a books-by-mail business.
I'm not saying there ain't a there there. But I'm extremely unconvinced that all the there there that's claimed to be there is really there.
Swapping out the phone or fax in a laundry, food-delivery, dog-walking, or house-cleaning business is not, in the larger scheme of things, particularly disruptive. It's often not even a particularly good business when catering to the Rich and Foolish. Not that parting same from their easily-won dollars isn't perhaps a laudable venture.
The other slant of the Minimum Viable User is the one who is pushed so far up against the wall, or fenced in and the competition fenced out, that they've no option but to use your service. Until such time as you decide to drag them off the plane. Captive-market vendor-customer relationship dynamics are typically poor.
For numerous reasons, the design considerations which go into such tools are also rarely generative. Oh: Advertising is one of those domains. Remember: Advertising breeds contempt.
Each of these MVU business cases argues against designing for the generative user. A rather common failing of market-based capitalism.
Robert Nozick explains criticism of same by creatives by the fact that "by and large, a capitalist society does not honor its intellectuals". A curious argument whose counterpoint is "capitalism is favoured by those whom it does unduly reward".
That's solipsistic.
Pointing this out is useful on a number of counts. It provides a ready response to the Bullshit Argument that "the market decides". Because what becomes clear is that market forces alone are not going to do much to encourage generative-use designs. Particularly not in a world of zero-marginal-cost products. That is: products whose marginal costs are small (and hence: pricing leverage), but with high fixed costs. And that means that the market is going to deliver a bunch of shitty tools.

Getting from Zero to One for Generative Mobile Platforms

Which suggests one of a few possible avenues out of the dilemma: a large set of generative tools have been built through non-capitalistic organisation. The Free Software / Open Source world would be a prime case in point, but it's hardly the first. Scientific research and collaboration, assembly of reference tools, dictionaries, encyclopedias. That's an option.
Though they need some sort of base around which to form and organise. And in the case of software they need hardware.
For all the evil Bill Gates unleashed upon the tech world (a fair bit of it related to the MVU and MFVU concepts themselves), he also unleashed a world of i386 chipset systems on which other software systems could be developed. Saw to it that he individually and specifically profited from every one sold, mind. But he wasn't able to restrict what ran on those boxes post-delivery.
GNU/Linux may well have needed Bill Gates. (And Gates may well not have been able to avoid creating Linux.)
There are more smartphones and Android devices today than there ever were PCs, but one area of technical advance over the decades has been in locking systems down. Hard. And, well, that's a problem.
I don't think it's the only one, though.
Commodity x86 hardware had a model for the operating system capable of utilising it which already existed: Unix. Linus Torvalds may have created Linux, but he didn't design it as such. That template had been cut already. It was a one-to-two problem, a question of scaling out. Which is to say it wasn't a Zero to One problem.
And yes, Peter Thiel is an evil asshat, which is why I'm pointing you specifically at where to steal his book. That's not to say he isn't an evil asshat without the occasional good idea.
I'm not sure that finding (and building) the Open Mobile Device Environment is a Zero to One problem -- Google, well, Android Inc., leveraged Linux, after all. But the design constraints are significantly different.
A standalone PC workstation is much closer to a multi-user Unix server in most regards, particularly as regards UI/UX, than is a mobile device measuring 25, or 20, or 12, or 8 cm. Or one without any keyboard. Or screen. And a certain set of tools and utilities must be created.
It's not as if attempts haven't been made, but they simply keep not getting anywhere. Maemo. FirefoxOS. Ubuntu Phone. Hell, the Psion and Palm devices weren't bad for what they did.
Pick one, guys & gals. Please.

The Mobile Applications Ecosystem is Broken

There's also the question of apps, and app space, itself. By one school of thought, a large count of available applications is a good thing. By another, it's a sign of failure of convergence. As of 2017, there are 2.5 million Google Play apps.
Is it even worth the search time? Is meaningful search of the space even possible?
The question occurs: is it really in Google's interest to proliferate applications which are separate, non-integrated, split development efforts, and often simply perform tasks poorly?
Why not find a way to focus that development effort to producing some truly, insanely, great apps?
The consequences are strongly reminiscent of the spyware and adware problem of desktop Windows in the early 2000s. For the same reason: competitive software development incentivises bad behaviour and poor functionality. It's the Barbarians at the Gate all over again. With so many independent development efforts, and such an inefficient communications channel to potential users, as well as poor revenue potential through kosher methods, the system is inherently incentivised to exceedingly user-hostile behaviour.
A valid counterargument would be to point to a set of readily-found, excellent, well-designed, well-behaved, user-centric tools fulfilling fundamental uses mentioned in my G+ post. But this isn't the case. Google's Play Store is an abject failure from a user perspective. And catering to the MVU carries a large share of the blame.
I'm not saying there should be only one of any given application either -- some choice is of value. Most Linux distributions will in fact offer a number of options for given functionality, both as shell or programming tools (where modular design frequently makes these drop-in replacements, down to syntax), and as GUI tools.
Whilst "freedom to fork" is a touted advantage of free software, "capacity to merge" is even more salient. Different design paths may be taken, then rejoined.
There's another line of argument about web-based interfaces. I'll skip much of that noting that the issues parallel much of the current discussion. And that the ability to use alternate app interfaces or browser site extensions is critical. Reddit and Reddit User Suite, by Andy Tuba, are prime exemplars of excellence in this regard.

Related Reading

A compilation of articles reflecting this trend.


Yes, this is a lot of words to describe the concept generally cast as "the lowest common denominator". I'm not claiming conceptual originality, but terminological originality. Additionally:
This post was adapted from an earlier Mastodon Tootstorm.


  1. Reddit fans of the concept might care to visit /DesirePaths.
submitted by dredmorbius to dredmorbius

distro comparison from a technical standpoint

Sorry if this is off-topic, but of all communities, arch is probably the best suited for this question. Every time I try to find out something about a Linux distro (this time around openSUSE), all I can find on the internet is "look, this distro now has a fantastic (insert random gimmicky window manager feature, available on any distro with that window manager), and is consuming 289 MB of RAM", which is pretty much useless. The distro's own site will give its vision (we aim for a stable and reliable blahblahblah). I want to know why /etc/environment may be empty or not. Does it use systemd? Is it a binary distribution? What's in and what's not in /usr? What are the precompile options on the basic packages? Any aliases out of the box? Is it UTF-8? I want architectural differences, and so far picking up bits of info from forum rants has been the only (semi-reliable and time-consuming) way. Am I missing something? Thanks and kudos in advance.
submitted by zeta27 to archlinux

Bitcoin Core 0.10.0 released | Wladimir | Feb 16 2015

Wladimir on Feb 16 2015:
Bitcoin Core version 0.10.0 is now available from:
This is a new major version release, bringing both new features and
bug fixes.
Please report bugs using the issue tracker at github:
The whole distribution is also available as torrent:
Upgrading and downgrading

How to Upgrade
If you are running an older version, shut it down. Wait until it has completely
shut down (which might take a few minutes for older versions), then run the
installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or
bitcoind/bitcoin-qt (on Linux).
Downgrading warning
Because release 0.10.0 makes use of headers-first synchronization and parallel
block download (see further), the block files and databases are not
backwards-compatible with older versions of Bitcoin Core or other software:
  • Blocks will be stored on disk out of order (in the order they are
received, really), which makes it incompatible with some tools or
other programs. Reindexing using earlier versions will also not work
anymore as a result of this.
  • The block index database will now hold headers for which no block is
stored on disk, which earlier versions won't support.
If you want to be able to downgrade smoothly, make a backup of your entire data
directory. Without this your node will need to start syncing (or importing from
bootstrap.dat) anew afterwards. It is possible that the data from a completely
synchronised 0.10 node may be usable in older versions as-is, but this is not
supported and may break as soon as the older version attempts to reindex.
This does not affect wallet forward or backward compatibility.
Notable changes

Faster synchronization
Bitcoin Core now uses 'headers-first synchronization'. This means that we first
ask peers for block headers (a total of 27 megabytes, as of December 2014) and
validate those. In a second stage, when the headers have been discovered, we
download the blocks. However, as we already know about the whole chain in
advance, the blocks can be downloaded in parallel from all available peers.
In practice, this means a much faster and more robust synchronization. On
recent hardware with a decent network link, it can be as little as 3 hours
for an initial full synchronization. You may notice a slower progress in the
very first few minutes, when headers are still being fetched and verified, but
it should gain speed afterwards.
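The two-stage scheme can be sketched as a toy model. This is purely illustrative (StubPeer, sync_headers_first, and the round-robin peer assignment are inventions for the sketch, not Bitcoin Core's actual networking code), but it shows why knowing the whole header chain up front permits parallel block download:

```python
from concurrent.futures import ThreadPoolExecutor

class StubPeer:
    """Hypothetical peer object for the sketch; a real node speaks the
    Bitcoin P2P protocol instead."""
    def get_block(self, header_hash):
        return ("block", header_hash)

def sync_headers_first(header_hashes, peers):
    """Toy model of headers-first sync: the small header chain is checked
    serially, then blocks are fetched in parallel across all peers."""
    validated = list(header_hashes)  # stage 1: stand-in for header validation
    # Stage 2: the whole chain is known in advance, so block downloads can
    # be distributed round-robin over the available peers.
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        return list(pool.map(
            lambda iv: peers[iv[0] % len(peers)].get_block(iv[1]),
            enumerate(validated)))
```

With sequential ("blocks-first") download, by contrast, each block must be requested from a single peer before the next is known, which is why the old scheme was slower and more fragile.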
A few RPCs were added/updated as a result of this:
  • getblockchaininfo now returns the number of validated headers in addition to
the number of validated blocks.
  • getpeerinfo lists both the number of blocks and headers we know we have in
common with each peer. While synchronizing, the heights of the blocks that we
have requested from peers (but haven't received yet) are also listed as
  • A new RPC getchaintips lists all known branches of the block chain,
including those we only have headers for.
Transaction fee changes
This release automatically estimates how high a transaction fee (or how
high a priority) transactions require to be confirmed quickly. The default
settings will create transactions that confirm quickly; see the new
'txconfirmtarget' setting to control the tradeoff between fees and
confirmation times. Fees are added by default unless the 'sendfreetransactions'
setting is enabled.
Prior releases used hard-coded fees (and priorities), and would
sometimes create transactions that took a very long time to confirm.
Statistics used to estimate fees and priorities are saved in the
data directory in the fee_estimates.dat file just before
program shutdown, and are read in at startup.
New command line options for transaction fee changes:
  • -txconfirmtarget=n : create transactions that have enough fees (or priority)
so they are likely to begin confirmation within n blocks (default: 1). This setting
is over-ridden by the -paytxfee option.
  • -sendfreetransactions : Send transactions as zero-fee transactions if possible
(default: 0)
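For reference, a sketch of how these two settings might appear in bitcoin.conf rather than on the command line (the values shown are illustrative; check `bitcoind -help` for the actual defaults):

```ini
# bitcoin.conf -- fee-related settings described above
# Aim for confirmation within ~6 blocks (default: 1); -paytxfee overrides this.
txconfirmtarget=6
# Attempt zero-fee transactions where possible (default 0, i.e. fees are added).
sendfreetransactions=0
```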
New RPC commands for fee estimation:
  • estimatefee nblocks : Returns approximate fee-per-1,000-bytes needed for
a transaction to begin confirmation within nblocks. Returns -1 if not enough
transactions have been observed to compute a good estimate.
  • estimatepriority nblocks : Returns approximate priority needed for
a zero-fee transaction to begin confirmation within nblocks. Returns -1 if not
enough free transactions have been observed to compute a good estimate.
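As a minimal sketch of what such a call looks like at the JSON-RPC layer (the transport, an authenticated HTTP POST to the RPC port, is omitted; `rpc_request` is a hypothetical helper, not part of Bitcoin Core):

```python
import json

def rpc_request(method, params, req_id=1):
    """Build a JSON-RPC payload in the shape bitcoind's HTTP-RPC expects.
    Sending it over HTTP with rpcuser/rpcpassword auth is left out."""
    return json.dumps({"jsonrpc": "1.0", "id": req_id,
                       "method": method, "params": params})

# Ask for the fee rate likely to confirm within 6 blocks; a running node
# would answer in BTC per 1,000 bytes, or -1 if it lacks data.
payload = rpc_request("estimatefee", [6])
```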
RPC access control changes
Subnet matching for the purpose of access control is now done
by matching the binary network address, instead of with string wildcard matching.
For the user this means that -rpcallowip takes a subnet specification, which can be
  • a single IP address (e.g. or fe80::0012:3456:789a:bcde)
  • a network/CIDR (e.g. or fe80::0000/64)
  • a network/netmask (e.g. or fe80::0012:3456:789a:bcde/ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff)
An arbitrary number of -rpcallowip arguments can be given. An incoming connection will be accepted if its origin address
matches one of them.
For example:
| 0.9.x and before | 0.10.x |
| -rpcallowip= | -rpcallowip= (unchanged) |
| -rpcallowip=192.168.1.* | -rpcallowip= |
| -rpcallowip=192.168.* | -rpcallowip= |
| -rpcallowip=* (dangerous!) | -rpcallowip=::/0 (still dangerous!) |
Using wildcards will result in the rule being rejected with the following error in debug.log:
 Error: Invalid -rpcallowip subnet specification: *. Valid are a single IP (e.g., a network/netmask (e.g. or a network/CIDR (e.g. 
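The new binary matching behaves like standard subnet membership testing. A sketch using Python's stdlib `ipaddress` module (this mimics the semantics; it is not Bitcoin Core's implementation, which is in C++):

```python
import ipaddress

def rpc_origin_allowed(origin, allowed_subnets):
    """Mimic the 0.10 behaviour: match the binary network address of an
    incoming connection against each -rpcallowip subnet specification."""
    addr = ipaddress.ip_address(origin)
    return any(addr in ipaddress.ip_network(spec, strict=False)
               for spec in allowed_subnets)

# A /24 given as CIDR admits the whole subnet; string wildcards such as
# "192.168.1.*" are no longer meaningful and bitcoind rejects them.
```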
REST interface
A new HTTP API is exposed when running with the -rest flag, which allows
unauthenticated access to public node data.
It is served on the same port as RPC, but does not need a password, and uses
plain HTTP instead of JSON-RPC.
Assuming a local RPC server running on port 8332, it is possible to request:
In every case, EXT can be bin (for raw binary data), hex (for hex-encoded
binary) or json.
For more details, see the doc/ document in the repository.
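As an illustrative sketch, REST URLs follow a /rest/RESOURCE/HASH.EXT pattern; the helper below composes such URLs. The `tx` and `block` resource names used in the test are assumptions to be checked against the repository documentation mentioned above:

```python
def rest_url(resource, obj_hash, ext="json", host="localhost", port=8332):
    """Compose a REST-interface URL; ext must be 'bin', 'hex' or 'json'
    (the three encodings described above)."""
    if ext not in ("bin", "hex", "json"):
        raise ValueError("unsupported extension: " + ext)
    return "http://%s:%d/rest/%s/%s.%s" % (host, port, resource, obj_hash, ext)
```

Because the interface is plain HTTP and unauthenticated, such a URL can be fetched with any HTTP client, with no RPC credentials required.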
RPC Server "Warm-Up" Mode
The RPC server is started earlier now, before most of the expensive
initialisations like loading the block index. It is available now almost
immediately after starting the process. However, until all initialisations
are done, it always returns an immediate error with code -28 to all calls.
This new behaviour can be useful for clients to know that a server is already
started and will be available soon (for instance, so that they do not
have to start it themselves).
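A client can exploit this by polling until the warm-up error clears. A hedged sketch (the -28 code is from the text above; `wait_for_node` and the `call` interface are hypothetical glue, not a Bitcoin Core API):

```python
import time

RPC_IN_WARMUP = -28  # error code returned while initialisations run

def wait_for_node(call, retries=30, delay=1.0):
    """Poll an RPC callable until the server leaves warm-up mode.
    `call` is any zero-argument function returning a decoded JSON-RPC
    response dict of the form {'result': ..., 'error': ...}."""
    for _ in range(retries):
        resp = call()
        err = resp.get("error")
        if not err:
            return resp["result"]
        if err.get("code") != RPC_IN_WARMUP:
            raise RuntimeError("RPC error: %r" % (err,))
        time.sleep(delay)
    raise TimeoutError("node still warming up")
```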
Improved signing security
For 0.10 the security of signing against unusual attacks has been
improved by making the signatures constant time and deterministic.
This change is a result of switching signing to use libsecp256k1
instead of OpenSSL. Libsecp256k1 is a cryptographic library
optimized for the curve Bitcoin uses which was created by Bitcoin
Core developer Pieter Wuille.
There exist attacks[1] against most ECC implementations where an
attacker on shared virtual machine hardware could extract a private
key if they could cause a target to sign using the same key hundreds
of times. While using shared hosts and reusing keys are inadvisable
for other reasons, it's a better practice to avoid the exposure.
OpenSSL has code in their source repository for derandomization
and reduction in timing leaks that we've eagerly wanted to use for a
long time, but this functionality has still not made its
way into a released version of OpenSSL. Libsecp256k1 achieves
significantly stronger protection: As far as we're aware this is
the only deployed implementation of constant time signing for
the curve Bitcoin uses and we have reason to believe that
libsecp256k1 is better tested and more thoroughly reviewed
than the implementation in OpenSSL.
Watch-only wallet support
The wallet can now track transactions to and from wallets for which you know
all addresses (or scripts), even without the private keys.
This can be used to track payments without needing the private keys online on a
possibly vulnerable system. In addition, it can help for (manual) construction
of multisig transactions where you are only one of the signers.
One new RPC, importaddress, is added which functions similarly to
importprivkey, but instead takes an address or script (in hexadecimal) as
argument. After using it, outputs credited to this address or script are
considered to be received, and transactions consuming these outputs will be
considered to be sent.
The following RPCs have optional support for watch-only:
getbalance, listreceivedbyaddress, listreceivedbyaccount,
listtransactions, listaccounts, listsinceblock, gettransaction. See the
RPC documentation for those methods for more information.
Compared to using getrawtransaction, this mechanism does not require
-txindex, scales better, integrates better with the wallet, and is compatible
with future block chain pruning functionality. It does mean that all relevant
addresses need to be added to the wallet before the payment, though.
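The watch-only flow can be sketched at the payload level. The (address, label, rescan) argument order shown is an assumption about importaddress's signature, and the helper itself is illustrative rather than Bitcoin Core code:

```python
import json

def importaddress_payload(address, label="", rescan=True):
    """JSON-RPC payload for the new importaddress call: watch an address
    (or hex script) without holding its private key. The assumed parameter
    order is (address, label, rescan); verify against the RPC help."""
    return json.dumps({"jsonrpc": "1.0", "id": "watch",
                       "method": "importaddress",
                       "params": [address, label, rescan]})

# After importing, calls such as getbalance can report watch-only funds
# via their new optional includeWatchonly-style arguments.
```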
Consensus library
Starting from 0.10.0, the Bitcoin Core distribution includes a consensus library.
The purpose of this library is to make the verification functionality that is
critical to Bitcoin's consensus available to other applications, e.g. to language
bindings such as python-bitcoinlib or
alternative node implementations.
This library is called (or, .dll for Windows).
Its interface is defined in the C header bitcoinconsensus.h.
In its initial version the API includes two functions:
  • bitcoinconsensus_verify_script verifies a script. It returns whether the indicated input of the provided serialized transaction
correctly spends the passed scriptPubKey under additional constraints indicated by flags
  • bitcoinconsensus_version returns the API version, currently at an experimental 0
The functionality is planned to be extended to e.g. UTXO management in upcoming releases, but the interface
for existing methods should remain stable.
Standard script rules relaxed for P2SH addresses
The IsStandard() rules have been almost completely removed for P2SH
redemption scripts, allowing applications to make use of any valid
script type, such as "n-of-m OR y", hash-locked oracle addresses, etc.
While the Bitcoin protocol has always supported these types of script,
actually using them on mainnet has been previously inconvenient as
standard Bitcoin Core nodes wouldn't relay them to miners, nor would
most miners include them in blocks they mined.
It has been observed that many of the RPC functions offered by bitcoind are
"pure functions", and operate independently of the bitcoind wallet. This
included many of the RPC "raw transaction" API functions, such as
bitcoin-tx is a newly introduced command line utility designed to enable easy
manipulation of bitcoin transactions. A summary of its operation may be
obtained via "bitcoin-tx --help". Transactions may be created or signed in a
manner similar to the RPC raw tx API. Transactions may be updated, deleting
inputs or outputs, or appending new inputs and outputs. Custom scripts may be
easily composed using a simple text notation, borrowed from the bitcoin test
suite.
This tool may be used for experimenting with new transaction types, signing
multi-party transactions, and many other uses. Long term, the goal is to
deprecate and remove "pure function" RPC API calls, as those do not require a
server round-trip to execute.
Other utilities "bitcoin-key" and "bitcoin-script" have been proposed, making
key and script operations easily accessible via command line.
Mining and relay policy enhancements
Bitcoin Core's block templates are now for version 3 blocks only, and any mining
software relying on its getblocktemplate must be updated in parallel to use
libblkmaker either version 0.4.2 or any version from 0.5.1 onward.
If you are solo mining, this will affect you the moment you upgrade Bitcoin
Core, which must be done prior to BIP66 achieving its 951/1001 status.
If you are mining with the stratum mining protocol: this does not affect you.
If you are mining with the getblocktemplate protocol to a pool: this will affect
you at the pool operator's discretion, which must be no later than BIP66
achieving its 951/1001 status.
The prioritisetransaction RPC method has been added to enable miners to
manipulate the priority of transactions on an individual basis.
Bitcoin Core now supports BIP 22 long polling, so mining software can be
notified immediately of new templates rather than having to poll periodically.
Support for BIP 23 block proposals is now available in Bitcoin Core's
getblocktemplate method. This enables miners to check the basic validity of
their next block before expending work on it, reducing risks of accidental
hardforks or mining invalid blocks.
Two new options to control mining policy:
  • -datacarrier=0/1 : Relay and mine "data carrier" (OP_RETURN) transactions
if this is 1.
  • -datacarriersize=n : Maximum size, in bytes, we consider acceptable for
"data carrier" outputs.
The relay policy has changed to more properly implement the desired behavior of not
relaying free (or very low fee) transactions unless they have a priority above the
AllowFreeThreshold(), in which case they are relayed subject to the rate limiter.
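A sketch of how these two policy options might look in bitcoin.conf (values are examples, not recommendations; consult `bitcoind -help` for the actual defaults):

```ini
# bitcoin.conf -- data-carrier policy knobs introduced in 0.10.0
# Relay and mine OP_RETURN ("data carrier") transactions (1) or not (0).
datacarrier=1
# Largest data-carrier output, in bytes, this node will accept (example value).
datacarriersize=40
```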
BIP 66: strict DER encoding for signatures
Bitcoin Core 0.10 implements BIP 66, which introduces block version 3, and a new
consensus rule, which prohibits non-DER signatures. Such transactions have been
non-standard since Bitcoin v0.8.0 (released in February 2013), but were
technically still permitted inside blocks.
This change breaks the dependency on OpenSSL's signature parsing, and is
required if implementations would want to remove all of OpenSSL from the
consensus code.
The same miner-voting mechanism as in BIP 34 is used: when 751 out of a
sequence of 1001 blocks have version number 3 or higher, the new consensus
rule becomes active for those blocks. When 951 out of a sequence of 1001
blocks have version number 3 or higher, it becomes mandatory for all blocks.
Backward compatibility with current mining software is NOT provided, thus miners
should read the first paragraph of "Mining and relay policy enhancements" above.
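The two voting thresholds can be expressed as a small function. This is a simplification (the real rule is evaluated per block over the 1001 blocks preceding it, and this helper is not Bitcoin Core code), but it captures the 751/1001 and 951/1001 arithmetic:

```python
def bip66_status(recent_versions, window=1001):
    """Evaluate the BIP 34-style voting rule used for BIP 66: 751 of the
    last 1001 block versions at v3+ activate the new consensus rule for
    v3+ blocks; 951 of 1001 make it mandatory for all blocks."""
    tail = recent_versions[-window:]
    v3_or_higher = sum(1 for v in tail if v >= 3)
    if v3_or_higher >= 951:
        return "mandatory"
    if v3_or_higher >= 751:
        return "active"
    return "inactive"
```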
0.10.0 Change log

Detailed release notes follow. This overview includes changes that affect external
behavior, not code moves, refactors or string updates.
  • f923c07 Support IPv6 lookup in bitcoin-cli even when IPv6 only bound on localhost
  • b641c9c Fix addnode "onetry": Connect with OpenNetworkConnection
  • 171ca77 estimatefee / estimatepriority RPC methods
  • b750cf1 Remove cli functionality from bitcoind
  • f6984e8 Add "chain" to getmininginfo, improve help in getblockchaininfo
  • 99ddc6c Add nLocalServices info to RPC getinfo
  • cf0c47b Remove getwork() RPC call
  • 2a72d45 prioritisetransaction
  • e44fea5 Add an option -datacarrier to allow users to disable relaying/mining data carrier transactions
  • 2ec5a3d Prevent easy RPC memory exhaustion attack
  • d4640d7 Added argument to getbalance to include watchonly addresses and fixed errors in balance calculation
  • 83f3543 Added argument to listaccounts to include watchonly addresses
  • 952877e Showing 'involvesWatchonly' property for transactions returned by 'listtransactions' and 'listsinceblock'. It is only appended when the transaction involves a watchonly address
  • d7d5d23 Added argument to listtransactions and listsinceblock to include watchonly addresses
  • f87ba3d added includeWatchonly argument to 'gettransaction' because it affects balance calculation
  • 0fa2f88 added includedWatchonly argument to listreceivedbyaddress/...account
  • 6c37f7f getrawchangeaddress: fail when keypool exhausted and wallet locked
  • ff6a7af getblocktemplate: longpolling support
  • c4a321f Add peerid to getpeerinfo to allow correlation with the logs
  • 1b4568c Add vout to ListTransactions output
  • b33bd7a Implement "getchaintips" RPC command to monitor blockchain forks
  • 733177e Remove size limit in RPC client, keep it in server
  • 6b5b7cb Categorize rpc help overview
  • 6f2c26a Closely track mempool byte total. Add "getmempoolinfo" RPC
  • aa82795 Add detailed network info to getnetworkinfo RPC
  • 01094bd Don't reveal whether password is <20 or >20 characters in RPC
  • 57153d4 rpc: Compute number of confirmations of a block from block height
  • ff36cbe getnetworkinfo: export local node's client sub-version string
  • d14d7de SanitizeString: allow '(' and ')'
  • 31d6390 Fixed setaccount accepting foreign address
  • b5ec5fe update getnetworkinfo help with subversion
  • ad6e601 RPC additions after headers-first
  • 33dfbf5 rpc: Fix leveldb iterator leak, and flush before gettxoutsetinfo
  • 2aa6329 Enable customising node policy for datacarrier data size with a -datacarriersize option
  • f877aaa submitblock: Use a temporary CValidationState to determine accurately the outcome of ProcessBlock
  • e69a587 submitblock: Support for returning specific rejection reasons
  • af82884 Add "warmup mode" for RPC server
  • e2655e0 Add unauthenticated HTTP REST interface to public blockchain data
  • 683dc40 Disable SSLv3 (in favor of TLS) for the RPC client and server
  • 44b4c0d signrawtransaction: validate private key
  • 9765a50 Implement BIP 23 Block Proposal
  • f9de17e Add warning comment to getinfo
Command-line options:
  • ee21912 Use netmasks instead of wildcards for IP address matching
  • deb3572 Add -rpcbind option to allow binding RPC port on a specific interface
  • 96b733e Add -version option to get just the version
  • 1569353 Add -stopafterblockimport option
  • 77cbd46 Let -zapwallettxes recover transaction meta data
  • 1c750db remove -tor compatibility code (only allow -onion)
  • 4aaa017 rework help messages for fee-related options
  • 4278b1d Clarify error message when invalid -rpcallowip
  • 6b407e4 -datadir is now allowed in config files
  • bdd5b58 Add option -sysperms to disable 077 umask (create new files with system default umask)
  • cbe39a3 Add "bitcoin-tx" command line utility and supporting modules
  • dbca89b Trigger -alertnotify if network is upgrading without you
  • ad96e7c Make -reindex cope with out-of-order blocks
  • 16d5194 Skip reindexed blocks individually
  • ec01243 --tracerpc option for regression tests
  • f654f00 Change -genproclimit default to 1
  • 3c77714 Make -proxy set all network types, avoiding a connect leak
  • 57be955 Remove -printblock, -printblocktree, and -printblockindex
  • ad3d208 remove -maxorphanblocks config parameter since it is no longer functional
Block and transaction handling:
  • 7a0e84d ProcessGetData(): abort if a block file is missing from disk
  • 8c93bf4 LoadBlockIndexDB(): Require block db reindex if any blk*.dat files are missing
  • 77339e5 Get rid of the static chainMostWork (optimization)
  • 4e0eed8 Allow ActivateBestChain to release its lock on cs_main
  • 18e7216 Push cs_mains down in ProcessBlock
  • fa126ef Avoid undefined behavior using CFlatData in CScript serialization
  • 7f3b4e9 Relax IsStandard rules for pay-to-script-hash transactions
  • c9a0918 Add a skiplist to the CBlockIndex structure
  • bc42503 Use unordered_map for CCoinsViewCache with salted hash (optimization)
  • d4d3fbd Do not flush the cache after every block outside of IBD (optimization)
  • ad08d0b Bugfix: make CCoinsViewMemPool support pruned entries in underlying cache
  • 5734d4d Only remove actualy failed blocks from setBlockIndexValid
  • d70bc52 Rework block processing benchmark code
  • 714a3e6 Only keep setBlockIndexValid entries that are possible improvements
  • ea100c7 Reduce maximum coinscache size during verification (reduce memory usage)
  • 4fad8e6 Reject transactions with excessive numbers of sigops
  • b0875eb Allow BatchWrite to destroy its input, reducing copying (optimization)
  • 92bb6f2 Bypass reloading blocks from disk (optimization)
  • 2e28031 Perform CVerifyDB on pcoinsdbview instead of pcoinsTip (reduce memory usage)
  • ab15b2e Avoid copying undo data (optimization)
  • 341735e Headers-first synchronization
  • afc32c5 Fix rebuild-chainstate feature and improve its performance
  • e11b2ce Fix large reorgs
  • ed6d1a2 Keep information about all block files in memory
  • a48f2d6 Abstract context-dependent block checking from acceptance
  • 7e615f5 Fixed mempool sync after sending a transaction
  • 51ce901 Improve chainstate/blockindex disk writing policy
  • a206950 Introduce separate flushing modes
  • 9ec75c5 Add a locking mechanism to IsInitialBlockDownload to ensure it never goes from false to true
  • 868d041 Remove coinbase-dependant transactions during reorg
  • 723d12c Remove txn which are invalidated by coinbase maturity during reorg
  • 0cb8763 Check against MANDATORY flags prior to accepting to mempool
  • 8446262 Reject headers that build on an invalid parent
  • 008138c Bugfix: only track UTXO modification after lookup
P2P protocol and network code:
  • f80cffa Do not trigger a DoS ban if SCRIPT_VERIFY_NULLDUMMY fails
  • c30329a Add testnet DNS seed of Alex Kotenko
  • 45a4baf Add testnet DNS seed of Andreas Schildbach
  • f1920e8 Ping automatically every 2 minutes (unconditionally)
  • 806fd19 Allocate receive buffers in on the fly
  • 6ecf3ed Display unknown commands received
  • aa81564 Track peers' available blocks
  • caf6150 Use async name resolving to improve net thread responsiveness
  • 9f4da19 Use pong receive time rather than processing time
  • 0127a9b remove SOCKS4 support from core and GUI, use SOCKS5
  • 40f5cb8 Send rejects and apply DoS scoring for errors in direct block validation
  • dc942e6 Introduce whitelisted peers
  • c994d2e prevent SOCKET leak in BindListenPort()
  • a60120e Add built-in seeds for .onion
  • 60dc8e4 Allow -onlynet=onion to be used
  • 3a56de7 addrman: Do not propagate obviously poor addresses onto the network
  • 6050ab6 netbase: Make SOCKS5 negotiation interruptible
  • 604ee2a Remove tx from AlreadyAskedFor list once we receive it, not when we process it
  • efad808 Avoid reject message feedback loops
  • 71697f9 Separate protocol versioning from clientversion
  • 20a5f61 Don't relay alerts to peers before version negotiation
  • b4ee0bd Introduce preferred download peers
  • 845c86d Do not use third party services for IP detection
  • 12a49ca Limit the number of new addressses to accumulate
  • 35e408f Regard connection failures as attempt for addrman
  • a3a7317 Introduce 10 minute block download timeout
  • 3022e7d Require sufficent priority for relay of free transactions
  • 58fda4d Update seed IPs, based on crawler data
  • 18021d0 Remove from dnsseeds.
  • 6fd7ef2 Also switch the (unused) verification code to low-s instead of even-s
  • 584a358 Do merkle root and txid duplicates check simultaneously
  • 217a5c9 When transaction outputs exceed inputs, show the offending amounts so as to aid debugging
  • f74fc9b Print input index when signature validation fails, to aid debugging
  • 6fd59ee script.h: set_vch() should shift a >32 bit value
  • d752ba8 Add SCRIPT_VERIFY_SIGPUSHONLY (BIP62 rule 2) (test only)
  • 698c6ab Add SCRIPT_VERIFY_MINIMALDATA (BIP62 rules 3 and 4) (test only)
  • ab9edbd script: create sane error return codes for script validation and remove logging
  • 219a147 script: check ScriptError values in script tests
  • 0391423 Discourage NOPs reserved for soft-fork upgrades
  • 98b135f Make STRICTENC invalid pubkeys fail the script rather than the opcode
  • 307f7d4 Report script evaluation failures in log and reject messages
  • ace39db consensus: guard against openssl's new strict DER checks
  • 12b7c44 Improve robustness of DER recoding code
  • 76ce5c8 fail immediately on an empty signature
Build system:
  • f25e3ad Fix build in OS X 10.9
  • 65e8ba4 build: Switch to non-recursive make
  • 460b32d build: fix broken boost chrono check on some platforms
  • 9ce0774 build: Fix windows configure when using --with-qt-libdir
  • ea96475 build: Add mention of --disable-wallet to bdb48 error messages
  • 1dec09b depends: add shared dependency builder
  • c101c76 build: Add --with-utils (bitcoin-cli and bitcoin-tx, default=yes). Help string consistency tweaks. Target sanity check fix
  • e432a5f build: add option for reducing exports (v2)
  • 6134b43 Fixing condition 'sabotaging' MSVC build
  • af0bd5e osx: fix signing to make Gatekeeper happy (again)
  • a7d1f03 build: fix dynamic boost check when --with-boost= is used
  • d5fd094 build: fix qt test build when libprotobuf is in a non-standard path
  • 2cf5f16 Add libbitcoinconsensus library
  • 914868a build: add a deterministic dmg signer
  • 2d375fe depends: bump openssl to 1.0.1k
  • b7a4ecc Build: Only check for boost when building code that requires it
Wallet:
  • b33d1f5 Use fee/priority estimates in wallet CreateTransaction
  • 4b7b1bb Sanity checks for estimates
  • c898846 Add support for watch-only addresses
  • d5087d1 Use script matching rather than destination matching for watch-only
  • d88af56 Fee fixes
  • a35b55b Dont run full check every time we decrypt wallet
  • 3a7c348 Fix make_change to not create half-satoshis
  • f606bb9 fix a possible memory leak in CWalletDB::Recover
  • 870da77 fix possible memory leaks in CWallet::EncryptWallet
  • ccca27a Watch-only fixes
  • 9b1627d [Wallet] Reduce minTxFee for transaction creation to 1000 satoshis
  • a53fd41 Deterministic signing
  • 15ad0b5 Apply AreSane() checks to the fees from the network
  • 11855c1 Enforce minRelayTxFee on wallet created tx and add a maxtxfee option
GUI:
  • c21c74b osx: Fix missing dock menu with qt5
  • b90711c Fix Transaction details shows wrong To:
  • 516053c Make links in 'About Bitcoin Core' clickable
  • bdc83e8 Ensure payment request network matches client network
  • 65f78a1 Add GUI view of peer information
  • 06a91d9 VerifyDB progress reporting
  • fe6bff2 Add BerkeleyDB version info to RPCConsole
  • b917555 PeerTableModel: Fix potential deadlock. #4296
  • dff0e3b Improve rpc console history behavior
  • 95a9383 Remove CENT-fee-rule from coin control completely
  • 56b07d2 Allow setting listen via GUI
  • d95ba75 Log messages with type>QtDebugMsg as non-debug
  • 8969828 New status bar Unit Display Control and related changes
  • 674c070 seed OpenSSL PNRG with Windows event data
  • 509f926 Payment request parsing on startup now only changes network if a valid network name is specified
  • acd432b Prevent balloon-spam after rescan
  • 7007402 Implement SI-style (thin space) thoudands separator
  • 91cce17 Use fixed-point arithmetic in amount spinbox
  • bdba2dd Remove an obscure option no-one cares about
  • bd0aa10 Replace the temporary file hack currently used to change Bitcoin-Qt's dock icon (OS X) with a buffer-based solution
  • 94e1b9e Re-work overviewpage UI
  • 8bfdc9a Better looking trayicon
  • b197bf3 disable tray interactions when client model set to 0
  • 1c5f0af Add column Watch-only to transactions list
  • 21f139b Fix tablet crash. closes #4854
  • e84843c Broken addresses on command line no longer trigger testnet
  • a49f11d Change splash screen to normal window
  • 1f9be98 Disable App Nap on OSX 10.9+
  • 27c3e91 Add proxy to options overridden if necessary
  • 4bd1185 Allow "emergency" shutdown during startup
  • d52f072 Don't show wallet options in the preferences menu when running with -disablewallet
  • 6093aa1 Qt: QProgressBar CPU-Issue workaround
  • 0ed9675 [Wallet] Add global boolean whether to send free transactions (default=true)
  • ed3e5e4 [Wallet] Add global boolean whether to pay at least the custom fee (default=true)
  • e7876b2 [Wallet] Prevent user from paying a non-sense fee
  • c1c9d5b Add Smartfee to GUI
  • e0a25c5 Make askpassphrase dialog behave more sanely
  • 94b362d On close of splashscreen interrupt verifyDB
  • b790d13 English translation update
  • 8543b0d Correct tooltip on address book page
Tests:
  • b41e594 Fix script test handling of empty scripts
  • d3a33fc Test CHECKMULTISIG with m == 0 and n == 0
  • 29c1749 Let tx (in)valid tests use any SCRIPT_VERIFY flag
  • 6380180 Add rejection of non-null CHECKMULTISIG dummy values
  • 21bf3d2 Add tests for BoostAsioToCNetAddr
  • b5ad5e7 Add Python test for -rpcbind and -rpcallowip
  • 9ec0306 Add CODESEPARATOR/FindAndDelete() tests
  • 75ebced Added many rpc wallet tests
  • 0193fb8 Allow multiple regression tests to run at once
  • 92a6220 Hook up sanity checks
  • 3820e01 Extend and move all crypto tests to crypto_tests.cpp
  • 3f9a019 added list/get received by address/ account tests
  • a90689f Remove timing-based signature cache unit test
  • 236982c Add skiplist unit tests
  • f4b00be Add CChain::GetLocator() unit test
  • b45a6e8 Add test for getblocktemplate longpolling
  • cdf305e Set -discover=0 in regtest framework
  • ed02282 additional test for OP_SIZE in script_valid.json
  • 0072d98 script tests: BOOLAND, BOOLOR decode to integer
  • 833ff16 script tests: values that overflow to 0 are true
  • 4cac5db script tests: value with trailing 0x00 is true
  • 89101c6 script test: test case for 5-byte bools
  • d2d9dc0 script tests: add tests for CHECKMULTISIG limits
  • d789386 Add "it works" test for bitcoin-tx
  • df4d61e Add bitcoin-tx tests
  • aa41ac2 Test IsPushOnly() with invalid push
  • 6022b5d Make script_{valid,invalid}.json validation flags configurable
  • 8138cbe Add automatic script test generation, and actual checksig tests
  • ed27e53 Add coins_tests with a large randomized CCoinViewCache test
  • 9df9cf5 Make SCRIPT_VERIFY_STRICTENC compatible with BIP62
  • dcb9846 Extend getchaintips RPC test
  • 554147a Ensure MINIMALDATA invalid tests can only fail one way
  • dfeec18 Test every numeric-accepting opcode for correct handling of the numeric minimal encoding rule
  • 2b62e17 Clearly separate PUSHDATA and numeric argument MINIMALDATA tests
  • 16d78bd Add valid invert of invalid every numeric opcode tests
  • f635269 tests: enable alertnotify test for Windows
  • 7a41614 tests: allow rpc-tests to get filenames for bitcoind and bitcoin-cli from the environment
  • 5122ea7 tests: fix on windows
  • fa7f8cd tests: remove old pull-tester scripts
  • 7667850 tests: replace the old (unused since Travis) tests with new rpc test scripts
  • f4e0aef Do signature-s negation inside the tests
  • 1837987 Optimize -regtest setgenerate block generation
  • 2db4c8a Fix node ranges in the test framework
  • a8b2ce5 regression test only setmocktime RPC call
  • daf03e7 RPC tests: create initial chain with specific timestamps
  • 8656dbb Port/fix regression test
  • ca81587 Test the exact order of CHECKMULTISIG sig/pubkey evaluation
  • 7357893 Prioritize and display -testsafemode status in UI
  • f321d6b Add key generation/verification to ECC sanity check
  • 132ea9b miner_tests: Disable checkpoints so they don't fail the subsidy-change test
  • bc6cb41 QA RPC tests: Add tests block block proposals
  • f67a9ce Use deterministically generated script tests
  • 11d7a7d [RPC] add rpc-test for http keep-alive (persistent connections)
  • 34318d7 RPC-test based on invalidateblock for mempool coinbase spends
  • 76ec867 Use actually valid transactions for script tests
  • c8589bf Add actual signature tests
  • e2677d7 Fix smartfees test for change to relay policy
  • 263b65e tests: run sanity checks in tests too
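Entries 6fd7ef2 and f4e0aef above both concern "low-s" signatures: for any valid ECDSA signature (r, s), the pair (r, n − s) is also valid, so BIP62 picks the smaller of the two to remove that source of malleability. A minimal sketch, assuming the secp256k1 group order N; the helper name `to_low_s` is ours.

```python
# Low-S normalization: force s into the lower half of the range [1, N-1].
# N is the order of the secp256k1 curve used by Bitcoin.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def to_low_s(s: int) -> int:
    """Return the canonical (low) s value of an ECDSA signature."""
    return N - s if s > N // 2 else s
```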
Miscellaneous:
  • 122549f Fix incorrect checkpoint data for testnet3
  • 5bd02cf Log used config file to debug.log on startup
  • 68ba85f Updated Debian example bitcoin.conf with config from wiki + removed some cruft and updated comments
  • e5ee8f0 Remove -beta suffix
  • 38405ac Add comment regarding experimental-use service bits
  • be873f6 Issue warning if collecting RandSeed data failed
  • 8ae973c Allocate more space if necessary in RandSeedAddPerfMon
  • 675bcd5 Correct comment for 15-of-15 p2sh script size
  • fda3fed libsecp256k1 integration
  • 2e36866 Show nodeid instead of addresses in log (for anonymity) unless otherwise requested
  • cd01a5e Enable paranoid corruption checks in LevelDB >= 1.16
  • 9365937 Add comment about never updating nTimeOffset past 199 samples
  • 403c1bf contrib: remove getwork-based pyminer (as getwork API call has been removed)
  • 0c3e101 contrib: Added systemd .service file in order to help distributions integrate bitcoind
  • 0a0878d doc: Add new DNSseed policy
  • 2887bff Update coding style and add .clang-format
  • 5cbda4f Changed LevelDB cursors to use scoped pointers to ensure destruction when going out of scope
  • b4a72a7 contrib/linearize: split output files based on new-timestamp-year or max-file-size
  • e982b57 Use explicit fflush() instead of setvbuf()
  • 234bfbf contrib: Add init scripts and docs for Upstart and OpenRC
  • 01c2807 Add warning about the merkle-tree algorithm duplicate txid flaw
  • d6712db Also create pid file in non-daemon mode
  • 772ab0e contrib: use batched JSON-RPC in linarize-hashes (optimization)
  • 7ab4358 Update bash-completion for v0.10
  • 6e6a36c contrib: show pull # in prompt for github-merge script
  • 5b9f842 Upgrade leveldb to 1.18, make chainstate databases compatible between ARM and x86 (issue #2293)
  • 4e7c219 Catch UTXO set read errors and shutdown
  • 867c600 Catch LevelDB errors during flush
  • 06ca065 Fix CScriptID(const CScript& in) in empty script case
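Entry 01c2807 above refers to the known flaw (CVE-2012-2459) in Bitcoin's merkle-tree algorithm: because an odd-length level duplicates its last hash, a transaction list with an explicit trailing duplicate produces the same root as the list without it. A minimal Python sketch of the collision, using Bitcoin's double-SHA256 (helper names are ours):

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Bitcoin-style merkle root: odd levels duplicate their last hash."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # the duplication that enables the flaw
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

a, b, c = (dsha256(bytes([i])) for i in (1, 2, 3))
# [a, b, c] and [a, b, c, c] hash to the same root.
assert merkle_root([a, b, c]) == merkle_root([a, b, c, c])
```

This is why block validation must reject duplicate txids rather than rely on the merkle root alone.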

Thanks to everyone who contributed to this release:
  • 21E14
  • Adam Weiss
  • Aitor Pazos
  • Alexander Jeng
  • Alex Morcos
  • Alon Muroch
  • Andreas Schildbach
  • Andrew Poelstra
  • Andy Alness
  • Ashley Holman
  • Benedict Chan
  • Ben Holden-Crowther
  • Bryan Bishop
  • BtcDrak
  • Christian von Roques
  • Clinton Christian
  • Cory Fields
  • Cozz Lovan
  • daniel
  • Daniel Kraft
  • David Hill
  • Derek701
  • dexX7
  • dllud
  • Dominyk Tiller
  • Doug
  • elichai
  • elkingtowa
  • ENikS
  • Eric Shaw
  • Federico Bond
  • Francis GASCHET
  • Gavin Andresen
  • Giuseppe Mazzotta
  • Glenn Willen
  • Gregory Maxwell
  • gubatron
  • HarryWu
  • himynameismartin
  • Huang Le
  • Ian Carroll
  • imharrywu
  • Jameson Lopp
  • Janusz Lenar
  • JaSK
  • Jeff Garzik
  • JL2035
  • Johnathan Corgan
  • Jonas Schnelli
  • jtimon
  • Julian Haight
  • Kamil Domanski
  • kazcw
  • kevin
  • kiwigb
  • Kosta Zertsekel
  • LongShao007
  • Luke Dashjr
  • Mark Friedenbach
  • Mathy Vanvoorden
  • Matt Corallo
  • Matthew Bogosian
  • Micha
  • Michael Ford
  • Mike Hearn
  • mrbandrews
  • mruddy
  • ntrgn
  • Otto Allmendinger
  • paveljanik
  • Pavel Vasin
  • Peter Todd
  • phantomcircuit
  • Philip Kaufmann
  • Pieter Wuille
  • pryds
  • randy-waterhouse
  • R E Broadley
  • Rose Toomey
  • Ross Nicoll
  • Roy Badami
  • Ruben Dario Ponticelli
  • Rune K. Svendsen
  • Ryan X. Charles
  • Saivann
  • sandakersmann
  • SergioDemianLerner
  • shshshsh
  • sinetek
  • Stuart Cardall
  • Suhas Daftuar
  • Tawanda Kembo
  • Teran McKinney
  • tm314159
  • Tom Harding
  • Trevin Hofmann
  • Whit J
  • Wladimir J. van der Laan
  • Yoichi Hirai
  • Zak Wilcox
As well as everyone that helped translating on Transifex.
Also lots of thanks to the website team David A. Harding and Saivann Carignan.