This design outline concerns the implementation of a protocol for
dynamic routing by port-knocking in the OpenBSD packet filter
pf(4). This protocol is intended for the purpose of protecting virtual
private networks against denial of service (DoS) attacks from without.
This design is intended solely to enhance _availability_ of services
which would otherwise be open to DoS attacks; it is a dynamic routing
protocol and makes _no_ claims to do anything for the _privacy,
integrity_ or _authenticity_ of the traffic payloads. These issues are
properly addressed by protocols such as IPsec and TLS.
The idea is to provide a means by which the existence of any TCP
service may be rendered undetectable by active port-scans and/or
passive traffic flow analyses of TCP/IP routing information in the
headers of packets passing over physical (as opposed to virtual,
i.e. tunnelled) networks.
Only those with a certain specific "need to know" will be able to
direct traffic to those IP addresses which are the ingress points of
protected VPNs. This need-to-know will be conferred by the device of a
one-time, time-limited pre-shared key transmitted in the 32-bit ISN
field of SYN packets used to initiate one or more TCP/IP connections
between certain combinations of host/port.
This design should make possible the implementation of e.g., proxy
servers which automatically track VPN ingress point routing changes
and manage the creation, distribution and use of pre-shared keys on
behalf of clients and servers behind pf(4) "firewalls", and
furthermore, to do this transparently; i.e. without imposing any
procedural requirements on the users, and without modification of the
client/server operating-system or application programs on either side
of the interface.
This in turn will make possible the implementation of services to
dynamically (and non-deterministically, from the point-of-view of
anyone without a VPN connection) change the physical network addresses
of the VPNs' points of ingress, and to do this rapidly and frequently,
whilst automatically distributing the necessary routing changes to
enable the subsequent key generation and distribution described in the
preceding paragraph.
The design presented here owes a great deal to the TCP Stealth design of
Julian Kirsch[1]. The difference is only that instead of making the
one-time use of keys dependent on the varying TCP timestamp, which is
not universally implemented, we make the pre-shared key itself
one-time, and we extend the protocol to arbitrarily long sequences of
knocks which may be from more than one source address, directed to more
than one destination, and may be either synchronous or asynchronous. We
also implement the protocol as a routing mechanism, so making the
existence of services invisible to probes of active attackers as well
as passive ones who merely observe traffic flows (cf. [1] Sec 3.2,
p10). Another reason for not using the TCP timestamp as a key
modulator is that an attacker who can block the SYN/ACK responses of a
server knock can identify TCP Stealth knocks by the fact that the
retransmitted SYN packets have the same TCP timestamp.
One good feature of Kirsch's design we have not implemented is the
prevention of session hijacking by a man-in-the-middle. This is
achieved by the device of varying the isn-key according to the first
bytes of the payload of the first packet received after the connection
is established. The benefit of this is significant because an attacker
who can intercept TCP handshakes can effect a DoS attack on the client
by hijacking successful knocks, but with TCP Stealth payload
protection the server can safely reject or divert the hijacking
attempts and still allow the genuine client to connect, possibly
through the pfsync peer.
We do not implement this because it requires further changes to the
pf(4) modulate state code path, which would significantly complicate
testing. We have, however, made the key_type a parameter, so this
feature can be added in a second phase of development once the basic
functionality has been well-tested.
The following is an attempt to specify precisely what changes to the
existing pf(4) and related programs are required to implement the
desired functionality. Constructive comments would be much
appreciated.
Objections that this is so-called "security by obscurity" are simply
not valid because the isn-keys have time-limited validity, are
one-time use only, may be made arbitrarily complex and may be chosen
non-deterministically from the point of view of anyone who does not
have access to the protected VPNs, which already implies the required
need-to-know. We are in effect encrypting the destination addresses of
IP traffic with a one-time pad. Using a synchronous four key knock
sequence, for example, even knowing the exact length of the knock
sequence and all of the m possible source addresses and n possible
destination addresses, any would-be attacker will have a chance of far
less than one in 2^128 of correctly guessing the key.
[1] Julian Kirsch, "Improved Kernel-based Port-knocking in Linux",
Munich, 15 August 2014.
=========================================
The implementation will be maintained as a patch to the standard
OpenBSD source tree, affecting the pf(4), pfsync(4), tcpdump(8) and
pfctl(8) programs.
We require the implementation to satisfy the following conditions:
1. The code changes should be _trivially_ provable not to affect
security in _any_ way, if the features provided are
not in fact explicitly enabled in the pf(4) configuration.
2. When the features it provides _are_ used, it should be stated
exactly (and verifiably, so with explicitly stated reasons) what
negative security effects they potentially have on the operation
of pf(4).
3. Changes to existing code should be the minimum required to
implement the required functionality, and they should be such
that (a) their operational effects can be easily verified to be
conditional on the explicit enabling of the feature, and (b)
they are absolutely necessary for the implementation of that
feature.
4. A strategy for exhaustively testing _all_ significant conditions
on _all_ the modified code-paths must be laid out in advance of
implementation, and an exhaustive list of test cases developed
as the modifications are added.
The following design satisfies condition (1) because the default
maximum number of isn-keys in the isn_key tree is 0, hence it must be
explicitly set to a value > 0 by an ioctl(2) call, or the appearance
of "set limit isn-keys n" in the ruleset. But the first line of the
rule match testing (see step 9. below) requires the ISN appear in the
isn-keys tree, otherwise the packet is passed by that rule. Hence
unless explicitly enabled, this feature has no effect whatsoever on
any packet routing: all packets are passed as if the rule did not
exist.
Likewise any ioctl(2) operations will fail (see step 6. below) if the
isn-keys table size is found to be zero. Also, since no
isn-key-related pfsync(4) operations will occur if isn-keys is zero
(see step 12. below) and since all new pfctl(8) operations are via
ioctl(2) calls, (see steps 2. & 3. below) there will be no change to
the operation of either pfctl(8) or tcpdump(8), which will not receive
isn-key-related packets from the pfsync i/f. In addition, since the
default maxisnkeytmo timeout is 30s, no keys will affect routing
decisions, or use pf(4) resources for more than 30 seconds, unless
explicitly enabled.
The following design satisfies condition (2) because the first line of
the rule match testing (see step 9. below) requires the keys must all
have dst/src address in the anchor-local isn_key_{dst,src}
table. Therefore the only effect the isn-key rule option can have is
on packets where addresses of both endpoints have been explicitly
added to the respective tables.
Furthermore, since every isn-key is removed from the isn_keys table on
first use, and since connections are deferred until the pfsync(4) peer
ACKs these removals, in normal operation (i.e. with a congestion-free
pfsync(4) physical i/f between the peers), no isn-key will effect the
establishment of more than one TCP connection.
To show that condition (3) is also satisfied, the satisfaction of each
of the requirements 3a and 3b will be noted for each change in turn in
the steps below.
Condition (4) will be satisfied by a testing framework based on qemu
emulations of one or more systems (the test machines) "instrumented"
by debug log messages redirected by syslogd(8) to a pipe program which
writes them to a serial device /dev/cuaXX, from whence they will be
read by the test framework running on the test host monitoring the
associated qdev pipe. The test framework will match a certain "test"
prefix with an event code to a particular test event. The test
framework will be able to respond to events by executing programs as
root as necessary to set up configurations, configure interfaces etc,
by writing commands to, and reading output from, a pipe which will
correspond to the stdin/stdout of a root shell on the test
machines. The test framework will also be able to communicate with
arbitrary other programs on the test machines to make certain ioctl(2)
calls, etc, based on input from serial devices via qemu pipes on the
test host. The test framework will also have access to tunnels via
which it can send and receive raw packets on the test network. The
test framework will be scripted by a command language allowing the
specification of state machines which respond to events and timeouts by
actions and state changes. Actions will include the ability to
schedule timeouts, send packets, log test results etc.
The details of the test framework have yet to be specified. For now we
will simply note the facilities that will be required to test the
changes below.
1. Add a new pool and RB trees, in sys/net/pf.c, for isn keys, if and
only if PF_LIMIT_IKS > 0. Fields are:
keyid, proto,
src_add, src_port, dst_add, dst_port, anchor,
keyseq, async, seqno,
isn_key, key_type, timeout, uid, gid
Where src_add and/or dst_add may be specified as addresses are
specified in pf rules, i.e. as table names, route labels, etc.
If keyseq == keyid then
If seqno == 1 then this is a simple key.
Otherwise it's the last in a sequence of seqno knocks
A synchronous knock sequence is made in reverse order of seqno,
Otherwise it's asynchronous and the knocks can be
made in any order, except the last must have
keyseq == keyid
Add pf_isn_key_insert
Add pf_find_isn_key_byid etc.
Add pf_status.isn_keys - pfvar.h line 1415
Add pf_status.maxisnkeytmo - pfvar.h around line 1406
Add pf_status.isnkeyid
Also add ioctls for setting/getting maxisnkeytmo, see step 6
below.
Implementation conditions:
(3a) the allocation of the new pool and RB trees is conditional
on the explicit enabling of the service by setting
PF_LIMIT_IKS to a non-zero value.
(3b) It is absolutely necessary to store the pre-shared keys in the
pf(4) address space if it is to check for their existence in
filtered packets.
(4) Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test1.X referring
to this step.
2. Add pfctl.c functions:
Add pfctl.maxisnkeytmo - pfctl_parser.h line 92
Add syntax for maxisnkeytmo at parse.y, around line 678
Add pfctl_{set,load}_maxisnkeytmo to pfctl.c line 1890
void pfctl_set_maxisnkeytmo(struct pfctl *pf, u_int32_t seconds)
int pfctl_load_maxisnkeytmo(struct pfctl *pf, u_int32_t seconds)
Implementation conditions:
(3a) pfctl implements the change via ioctl(2) calls, so by the
condition on step 6. below, the timeout can only be extended
if the limit[PF_LIMIT_IKS] > 0
(3b) The ability to extend the maximum key timeout is a necessary
contingency for the case where exposed transport networks are
congested, possibly because of an ongoing DoS attack flooding
one or more links.
(4) Test framework for running pfctl with arbitrary commands to
load rulesets and test for errors.
3. Add a new limit (pfctl_set_limit) counter:
#define PFIKS_HIWAT 0 /* default isn-key tree max size */
{ "isn-keys", PF_LIMIT_IKS }, /* sbin/pfctl/pfctl.c line 143 */
Implementation conditions:
(3a) This requirement is dropped due to circularity.
(3b) It is self-evident that this feature is absolutely necessary.
(4) Test framework for running arbitrary isn-key related ioctl(2) commands to
load rulesets and report results and errors.
4. Add isn-key keyword for matching rules
sbin/pfctl/parse.y line 2395
" " " 1834
Add post-parse checks for:
no multiple use,
only with IPPROTO_TCP,
only with keep-state outgoing rules if SYN_PROXY is used
Add filter_opts.isn_key flag - sbin/pfctl/parse.y line 250
Add pf_rules.isn_key flag - pfvar.h, line 625
u_int8_t isn_key;
Implementation conditions:
(3a) Although rules may be introduced without having explicitly
enabled the feature by setting limit[PF_LIMIT_IKS] > 0, the
setting of the flag has no effect on routing if the feature is
not enabled, as per the first match-test condition of step
9. below.
(3b) It is self-evident that this feature is absolutely necessary.
(4) Test framework for running pfctl with arbitrary commands to
load rulesets and test for errors.
5. Add purge_thread function for clearing isn key tree:
pf_unlink_isn_key pf.c line 1273
pf_free_isn_key
pf_purge_expired_isn_keys
The above functions should either panic, or return immediately if
limit[PF_LIMIT_IKS] == 0.
Implementation conditions:
(3a) If there are no keys in the isn-keys table, then these
functions will return immediately.
(3b) This feature is absolutely necessary because isn-keys are
time-limited, and must be removed from the tree when timed
out to free the limited tree space.
(4) Test framework for running pfctl with arbitrary commands to
load rulesets and test for errors.
Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test5.X referring
to this step.
6. Add ioctl(2) calls to get/set/clear entries, in groups
In sys/net/pf_ioctl.c:
#define DIOCCLRIKS _IOWR('D', 97, struct pfioc_ik_kill)
#define DIOCGETIK _IOWR('D', 98, struct pfioc_ik)
#define DIOCGETIKS _IOWR('D', 99, struct pfioc_iks)
#define DIOCADDIKS _IOWR('D', 100, struct pfioc_iks)
#define DIOCSETMAXISNKEYTMO _IOWR('D', 101, u_int32_t)
#define DIOCGETMAXISNKEYTMO _IOWR('D', 102, u_int32_t)
Or can we use 51--56?
Always fail any of the above ioctl(2) calls whenever
limit[PF_LIMIT_IKS] == 0
In DIOCADDIKS: the timeouts must be >0 and <= maxisnkeytmo
a simple key shall have seqno == 1 and async == 0
if seqno > 1 then there must be at least seqno - 1
following keys in the input structure and if
the seqno of each of this set are in strictly
descending order from seqno ... 1, then those n
keys will form a single compound knock.
in either case, the keyseq values must all be 0,
and will be filled in and set equal to the keyid
of the first key in the sequence.
async should be 0 or 1 and must be the same for all
keys in a sequence.
If any of the above checks fail, EINVAL is returned without
altering the key tree in any way: i.e. all keys must be
correct, or none will be added.
Add EACCES permission checks for new ioctls
Add ioctls for maxisnkeytmo
Add pf_trans_set.maxisnkeytmo around pf_ioctl.c line 130
u_int32_t maxisnkeytmo;
#define PF_TSET_MAXISNKEYTMO 0x10
Add PF_TSET_* case for pf_trans_set_commit() around line 2733
If real uid is non-zero, then only get/add/clr isn-keys with that
particular real uid/gid. Get ruid, rgid from
p_cred->p_r{uid,gid} thus:
uid_t ruid = p->p_cred->p_ruid;
gid_t rgid = p->p_cred->p_rgid;
isn_key->uid = ruid == 0 ? pfik->pfsync_ik->uid : ruid;
isn_key->gid = ruid == 0 ? pfik->pfsync_ik->gid : rgid;
sbin/pfctl/pfctl.c option changes:
Add -F option modifier 'Keys' to flush isn-keys table
Add -s option modifier 'Keys' to show isn keys, line 2376:
Add isn-keys show on 'show all' option.
Implementation conditions:
(3a) The ioctl(2) calls fail if limit[PF_LIMIT_IKS] == 0, and the
extra pfctl(8) options are implemented by these ioctl(2)
calls.
(3b) The ADDIKS ioctl is self-evidently necessary and the CLRIKS
ioctl is necessary to disable the feature. The GETIKS/GETIK
are necessary to find out what keys are currently enabled. The
GET/SETMAXISNKEYTMO ioctls are necessary to allow this to be
changed at run-time without flushing and reloading the entire
pf(4) ruleset.
We do not use the existing mechanism for setting default
timeouts because this is not a default timeout, it is the
_maximum_ timeout.
The -s and -F modifiers are necessary to allow the key table to
be examined and/or flushed quickly and easily.
(4) Test framework for running pfctl with arbitrary commands under
arbitrary real uids/gids (via sudo) to load rulesets and test
for errors.
Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test6.X referring
to this step.
7. Add reason codes for dropping packets
#define PFRES_PRE_ISN_KEY 16 /* isn-key */
#define PFRES_BAD_ISN_KEY 17 /* bad isn-key */
Implementation conditions (3a) and (3b) and (4) are satisfied where
these codes are used in steps 9. and 10. below.
8. Add field pf_desc.isn_key to keep the ISN of the incoming SYN packet.
Implementation conditions:
(3a) Has no effect in itself, regardless of whether or not the
feature is enabled.
(3b) required for step 10. below.
9. Add isn-key rule matching/key dropping code around pf_test_rule pf.c line 3245
This only works for outgoing TCP connections if they are matched by
isn-key rules which specify SYN_PROXY keep_state, which must then
use exactly this isn-key for the ISN on the server-side of the
connection.
To test packets, look up all isn-keys matching anchor/proto and
where {dst,src}_add are each in the anchor-local
isn_key_{dst,src} table (resp.). Then test each one for details:
address/uid/gid/etc as follows:
(*) If nothing then
pass
Otherwise
Match incoming connects on dst_add/port and isn_key
Match outgoing connects on dst_add/port and
src_add/port(0 is wildcard) and test that if non-zero,
the uid/gid of the isn-key entry match those of the
src_add/port sockets.
The result of this will be a single key, or nothing
If nothing then
pass
Otherwise
If the matching isn-key has keyid == keyseq then
If either seqno == 1 or this is the only key with this keyseq then
set pf_desc.isn_key to the matching isn_key
match
Otherwise
DEL the entire sequence keyseq == this_keyseq && keyid != this_keyid
log BAD_KNOCK
pass PFRES_BAD_ISN_KEY
Otherwise
If async == 1 then
pass PFRES_PRE_ISN_KEY
Otherwise
If this_keyid is first in a list of isn-keys with
keyseq == this_keyid sorted by descending order of seqno then
pass PFRES_PRE_ISN_KEY
Otherwise
DEL the entire sequence keyseq == this_keyseq && keyid != this_keyid
log BAD_KNOCK
pass PFRES_BAD_ISN_KEY
DEL the key with keyid == this_keyid
On receipt of a valid SYN/ACK with a final matching ISN key, wait
for pfsync to DEL_ACK this before making the connection.
Other protocols (currently there are none): hold the first packet
until the pfsync DEL_ACK arrives.
This prevents a race with another firewall. For this to work,
the interface must have been set up for pfsync(4) deferral using
ifconfig(8), and the pfsync physical i/f must be congestion-free
so that deferrals are not timed out (at present, this means they
must be ACKed by pfsync within 20 ms, which is hard-coded.)
Implementation conditions:
(3a) The first step (*) of the match test requires the isn-key
table to be non-empty, so that if the feature is not enabled
by setting limit[PF_LIMIT_IKS] > 0 then the candidate key list
will be empty and no packet routing changes will be made.
(3b) It is self-evident that this is absolutely necessary to implement
the required functionality.
(4) Test framework for running pfctl with arbitrary commands under
arbitrary real uids/gids (via sudo) to load rulesets and test
for errors.
Test framework actions to send TCP packets
Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test9.X referring
to this step.
Test framework events corresponding to receipt of TCP packets
with certain matching SEQ and ACK fields, flags, src and
destination addresses:ports. These could be implemented using
a bpf(4) filter attached to the test machine tunnel i/f on the
test host.
10. Modify SYN_PROXY and MODULATE_STATE to preserve ISN for outgoing
isn-keyed connections pf.c lines 3547 and 3652 (We want the SYN flood
protection, but we need to be able to choose the ISN)
Always make changes to the existing routing code conditional on
both pf_desc.r->isn_key and pf_desc.isn_key being non-zero, so
that it is easy to show there are no changes to the routing of any
packet which is _not_ matched by some isn-key rule and some
particular key in the isn-key tree.
Implementation conditions:
(3a) The pf_desc.isn_key is only non-zero when a match with some
entry in the isn-key tree has occurred, and this can only
happen when the feature has been explicitly enabled.
(3b) These changes are absolutely necessary to implement the
feature because the SYN_PROXY code would otherwise change the
ISN of outgoing TCP SYN packets thus preventing the feature
from working for outgoing connections.
(4) As for step 9 above.
Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test10.X referring
to this step.
11. Add pfsync structures and packets for isn keys
#define PFSYNC_ACT_INS_IK 16 /* insert isn key */
#define PFSYNC_ACT_DEL_IK 17 /* delete isn key */
#define PFSYNC_ACT_DEL_IK_ACK 18 /* delete isn key ACK */
#define PFSYNC_ACT_CLR_IK 19 /* clear all isn keys */
Add to if_pfsync.h line 285:
#define PFSYNC_S_IKDACK 0x06
// One hopes there is some administrative mechanism to reserve numbers
// in this space so that patches can be applied to consecutive OpenBSD
// releases without prejudicing the compatibility of patched pfsync(4)
// implementations in consecutive releases.
struct pfsync_isn_key {
u_int64_t keyid;
u_int64_t keyseq;
u_int32_t anchor;
u_int8_t seqno;
u_int32_t isn_key;
u_int32_t timeout;
u_int32_t keytype;
u_int8_t async;
struct pf_rule_addr src;
struct pf_rule_addr dst;
uid_t uid;
gid_t gid;
u_int8_t proto;
u_int32_t creation;
u_int32_t expire;
u_int32_t creatorid;
u_int8_t sync_flags;
};
struct pfsync_clr_ik {
char anchor[MAXPATHLEN];
u_int32_t creatorid;
} __packed;
struct pfsync_del_ik {
u_int64_t keyid;
u_int64_t keyseq;
u_int32_t creatorid;
} __packed;
struct pfsync_del_ik_ack {
u_int64_t id;
u_int32_t creatorid;
} __packed;
Implementation conditions:
(3a) These changes only have operational effects when code in steps
12. and 13. below uses them.
(3b) Ditto.
(4) Ditto
12. Add pfsync(4) glue fns in if_pfsync.c:
(*) The following should immediately test limit[PF_LIMIT_IKS] > 0
and log and return an error otherwise, eg:
log(LOG_ERR, "if_pfsync: pfsync_isn_key_xx: isn-key tree is empty.");
return (EINVAL);
pfsync_isn_key_import
pfsync_isn_key_export
pfsync_in_isn_key_clr
pfsync_in_isn_key_del
pfsync_in_isn_key_del_ack(caddr_t buf, int len, int count, int flags)
pfsync_in_isn_key_ins
pf_unlink_isn_key
pf_isn_key_copyin
Implementation conditions:
(3a) Satisfied by the condition (*)
(3b) This is absolutely necessary if the feature is to operate in
fail-over configurations where routing is effected by more than
one pfsync peer. Without this facility dynamic routing
protocols such as OSPF could not be used to route around VPN
points of ingress which were under DoS attacks, for example.
(4) Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test12.X referring
to this step.
Test framework events corresponding to receipt of TCP packets
from pfsync(4) interfaces. These could be implemented using
a bpf(4) filter attached to the test machine tunnel i/f on the
test host.
13. Add sbin/tcpdump/print-pfsync.c functions:
pfsync_print_isn_key_ins
pfsync_print_isn_key_del
pfsync_print_isn_key_del_ack
pfsync_print_isn_key_clr
Add sbin/tcpdump/pf_print_isn_key.c
print_isn_key(struct pf_sync_isn_key *isn_key, int flags)
Implementation conditions:
(3a) These functions will only be called when pfsync packets with
isn-key specific subheaders are received, which is conditional on
the explicit enabling of the feature as ensured by the
relevant conditions on step 12. above.
(3b) These changes are absolutely necessary if the operation of the
pfsync features is to be observable by tcpdump(8).
(4) Test framework events corresponding to LOG messages at level
DEBUG with an event identifier TEST:EVENT:test13.X referring
to this step.
Instrumenting tcpdump(8) with appropriate TEST:EVENT logging.
Test framework events corresponding to receipt of messages
from tcpdump(8)
Saturday, 25 October 2014
Sunday, 19 October 2014
Security Engineering for Linux Users
This is one way die-hard Linux users can find out what the word "engineering" really means. They can learn about OpenBSD without rebooting either their machines, or their minds.
First read the man pages. OpenBSD man pages aren't documentation, they're literature, so you need to see them nicely formatted. Get the source from a mirror, e.g.
mkdir ~/openbsd && cd ~/openbsd
wget http://mirrors.ucr.ac.cr/OpenBSD/5.5/src.tar.gz
wget http://mirrors.ucr.ac.cr/OpenBSD/5.5/sys.tar.gz
tar xzf src.tar.gz && tar xzf sys.tar.gz
Then put this shell script in a place where it's runnable:
#! /bin/sh
MP=$HOME/openbsd
FP=$(find $MP/. -name $2.$1)
if test -n "$FP" -a -f $FP ; then
if test -f /tmp/$2.$1.pdf ; then
echo "Done!"
else
man -Tps $FP | ps2pdf - /tmp/$2.$1.pdf 2> /dev/null
fi
evince /tmp/$2.$1.pdf &
else
echo "error: file $2.$1 does not exist."
fi
Now when you want to see a page, type something like
bsdman 5 pf.conf
Use QEMU to run OpenBSD virtual machines. You can download QEMU source and build it with commands like:
wget http://wiki.qemu-project.org/download/qemu-2.1.2.tar.bz2
tar xjf qemu-2.1.2.tar.bz2 && cd qemu-2.1.2
./configure --enable-gtk --with-gtkabi=3.0 --prefix=$HOME/usr --extra-ldflags=-Wl,-R,$HOME/usr/lib --extra-cflags=-I$HOME/usr/include
make && make install
This assumes you have things like gtk-3.0 and glib-3.0 installed in ~/usr, and that this is where you want qemu installed too.
If you're doing this on a machine or user account you care about, then you will want to check the signatures, and you will want to try and find out what they should be. Obviously there's no point checking the signatures if you got them from the same place as the code!
Get an install ISO image from one of the mirrors, e.g.:
wget ftp://mirrors.ucr.ac.cr/OpenBSD/5.5/i386/install55.iso
The same point we made above about checking signatures applies here too, of course. Now make a disk image to install onto:
qemu-img create -f qcow2 openbsd.img 4G
Now create some ifup scripts to start and stop the tunnel devices. The first is to handle the general case. Put this in /etc/qemu-ifup:
#! /bin/sh
addr=192.168.$2.1
mask=255.255.255.0
if test -z "$1" ; then
echo qemu-ifup: error: no interface given
exit 1
fi
ifconfig $1 inet $addr netmask $mask
And the second is the one to take the i/f down; put it in /etc/qemu-ifdown:
#! /bin/sh
exit 0
Then do the special cases (I have three); change the final n to one of 1...N for N guest VMs, and call them /etc/qemun-ifup where n is one of 1...N:
#! /bin/sh
/etc/qemu-ifup $1 n
Then make them executable (assuming they're the only files in /etc that are called qemu*):
chmod +x /etc/qemu*
Now install a standard OpenBSD on the image:
$HOME/usr/bin/qemu-system-i386 -hda openbsd.img -boot d -m 128 -cdrom install55.iso -net tap,vlan=0,script=/etc/qemu1-ifup -net nic
Set up the i/f em0 as 192.168.1.0/24 and give it IP address (fixed) 192.168.1.2.
Then shut down the VM properly (using /sbin/halt) and make N copies of the openbsd.img file called openbsdn.img, where n is one of 1...N.
Now make a script startbsd with this in it:
#! /bin/sh
if test ! -p $HOME/.cua01.$1 ; then
mkfifo -m u=rw,go= $HOME/.cua01.$1
fi
sudo /bin/sh -c "echo 1 >/proc/sys/net/ipv4/ip_forward"
sudo $HOME/usr/bin/qemu-system-i386 \
-runas $USER -hda openbsd$1.img -boot c -m 128 -name guest$1 \
-net tap,vlan=0,script=/etc/qemu$1-ifup \
-net nic \
-chardev pipe,id=com1,path=$HOME/.cua01.$1 \
-device isa-serial,chardev=com1,irq=3,iobase=0x2f8 \
-daemonize
Now you should be able to launch N instances with
./startbsd n
and customize them by setting the interfaces to be started with /etc/hostname.em0 containing
inet 192.168.n.2 255.255.255.0
where again n is one of 1...N.
Friday, 17 October 2014
Men Talking Crap
Here's another good read. This is one of the most thoughtful of the pieces on this theme that have appeared in the past few weeks. It's Ann Friedman's:
Ian Fried Man
https://medium.com/matter/this-is-the-last-thing-youll-ever-need-to-read-about-sexism-in-tech-56b9a3a77af0
I was privileged to have tutored some women undergraduate computer science students at Cambridge. At the end of the second year, one of them said to me "It's really good to see how everything fits together. You have this probability theory of Markov chains, with non-deterministic state machines, and that fits together with the quantum computing stuff on the one hand and with the regular languages and grammars on the other hand. Then the same ideas of eigenvectors and what-not come up in signal processing and graphics, and also in AI and inference, and then you have the fixpoints in the theory of computing ..."
And I thought "Wow! She's got it!" None of the men I'd taught at Cambridge ever seemed to think this was even worth mentioning. But for me it was the reason I liked to learn: to see how the ideas fit together. But what I always found was that in the end the ideas didn't match up properly. If you chase up the foundations of the theory of probability you either end up in an undisciplined mud-slinging of the "debate" between frequentists and Bayesians, or you follow the "pure" theory into analysis and end up having to learn about Lebesgue measure. So the intuitive sense women have of the ultimate unity of these ideas is belied by the extremely unintuitive way in which the theory has been developed into extremely abstruse corners where only a few macho alpha-male types dare to claim they understand it. And it's like this everywhere you look for the foundations of theory: where one would expect some sort of convergence, there is only divergence, and where one would expect clarity there is smoke.
And this is all the result of the male dominance of these fields. This is the ultimate reason, I think, why women don't do well in science and technology: it's all bull-shit made up by aggressive little boys who are more concerned about appearing to be clever than they are about understanding anything.
And something similar happens with money. Those little men who aren't smart enough to talk convincing bull-shit want money because that's how they know they're really somebody. Of course, they end up talking bull-shit about money too!
So Ladies, don't worry about not being paid as much, or about not doing as well as little boys. None of them know what they're talking about, and furthermore, they're all wrong. And this is what we are about to see. These rich little pricks are going to wake up stone cold sober one day soon, and realize that (a) they're stone broke too, and (b) while they were out of it, they signed up to a ninety-nine-year contract as, not even a bit-part, but an extra, in a 24/7 reality TV show called "The Grapes of Wrath."
Ian Fried Man
What they SHOULD be doing!
This is a nicely written story. It is readable by a non-tech person, and it also gives enough of the gist for a tech person to know what's behind it.
https://medium.com/matter/heres-why-public-wifi-is-a-public-health-hazard-dd5b8dcb55e6

But this is what we should be doing anyway. All these laptops and tablets are full of useless files and info that they could be sharing. And machines could be routing traffic between WiFi 'cells' and cellular phone connections. All the accounting could be done using what Goethe (in Wilhelm Meister's Apprenticeship) called "... one of the finest creations of the Human mind" which is double-entry book-keeping, of course.
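The accounting idea can be made concrete with a minimal double-entry ledger, in which every megabyte relayed is recorded as a debit against one node and a credit to another, so the books always balance. This is only a sketch: the `Ledger` class, the node names and the tariffs are all invented for illustration.

```python
from collections import defaultdict

class Ledger:
    """Minimal double-entry ledger: every transfer debits one account
    and credits another, so the whole ledger always sums to zero."""
    def __init__(self):
        self.balances = defaultdict(int)
        self.journal = []

    def transfer(self, debit, credit, amount, memo=""):
        # the defining double-entry invariant: equal and opposite postings
        self.balances[debit] -= amount
        self.balances[credit] += amount
        self.journal.append((debit, credit, amount, memo))

    def trial_balance(self):
        return sum(self.balances.values())

ledger = Ledger()
# node A owes node B for relaying 10 MB over B's cellular uplink;
# B in turn owes C for 4 MB carried across C's WiFi cell
ledger.transfer("A", "B", 10, "relay 10 MB via cellular")
ledger.transfer("B", "C", 4, "relay 4 MB via WiFi cell")

assert ledger.trial_balance() == 0   # the books always balance
print(dict(ledger.balances))         # {'A': -10, 'B': 6, 'C': 4}
```

The trial balance being identically zero is the property Goethe's book-keepers relied on: any node cooking its own figures shows up as an imbalance when ledgers are reconciled between peers.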
Wednesday, 15 October 2014
Trustworthy Hardware
This is good news, it's dated 23 September:
http://www.nsf.gov/news/news_summ.jsp?cntn_id=132795&org=NSF&from=news

Coincidentally, this is something I sent out on 10 September; it went to Richard Stallman, Linus Torvalds, Theo de Raadt and Roger Schell, amongst others:
The Intel x86 CPU "Instruction Set Architecture" is not really what one ordinarily thinks of as machine code. It is more like a bytecode language. Processors such as the Intel Atom series do not directly execute ISA instructions; rather, they emulate an x86 CPU by running an interpreter which is programmed in "microcode." In the case of the Atom CPUs, the "microcode" possibly bears a striking resemblance to ARM machine code. We don't actually know this, because it is encrypted---some may claim for "obvious reasons," but we will show that these are neither obvious, nor are they reasons. The published ISA specification is highly redundant in that there are many instructions which can be encoded in more than one way, but which have identical final results (i.e. they have the same "big step" operational semantics). Because the redundant encodings have defined operational semantics, they provide a means by which any agency having the capacity to inject "microcode" into the CPU can effect a covert two-way channel between machine code programs and the CPU. For example, it is possible to arrange that application and/or system software can determine whether or not it is running under emulation, and thereby moderate its behaviour if there is any risk of it being observed. This could be done by "instruction knocking", which is another instance of the teletype string trigger trap door described by Karger and Schell in [1]: using a special, highly improbable string of variant encodings of otherwise normal, well-defined instructions to trigger operational effects not explicitly specified in the processor documentation. Unless a software emulator were programmed to recognise all such undocumented sequences, that emulator would behave in a way that was observably different from the way a real processor would be expected to behave.
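To make the redundant-encoding channel concrete, here is a sketch in Python. It uses one real pair of redundant x86 encodings: `add eax, ebx` assembles either as `01 D8` (the ADD r/m32, r32 form) or as `03 C3` (the ADD r32, r/m32 form), and both have identical architectural effects. The "knock" sequence and the one-bit-per-instruction framing are invented for illustration; a real covert channel would of course be far less obvious.

```python
# Encode a covert bitstring in the *choice* between redundant x86
# encodings. Both byte sequences below perform "add eax, ebx" with
# identical architectural results, so only an observer who looks at
# the raw opcode bytes (e.g. the microcode layer) can recover the bit.

ENCODINGS = {0: bytes([0x01, 0xD8]),   # bit 0 -> ADD r/m32, r32 form
             1: bytes([0x03, 0xC3])}   # bit 1 -> ADD r32, r/m32 form

def embed(bits):
    """Emit one semantically identical ADD instruction per covert bit."""
    return b"".join(ENCODINGS[b] for b in bits)

def extract(code):
    """Recover the covert bits by inspecting the opcode byte of each
    two-byte instruction."""
    return [1 if code[i] == 0x03 else 0 for i in range(0, len(code), 2)]

knock = [1, 0, 1, 1, 0, 1, 0, 0]       # hypothetical "instruction knock"
assert extract(embed(knock)) == knock
```

An emulator that decoded both forms into the same internal operation, as any reasonable emulator would, destroys the channel, which is exactly why the program can use it to tell "real CPU" from emulation.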
Having once identified that it is very probably running on a "real CPU", such a program could then safely use undocumented instructions to directly interface with covert functions implemented in the "microcode". This channel could be used to re-program the microcode, for example: to recognise a new sequence of instructions characteristic of a particular cryptographic checksum calculation, and to adjust the results when they matched any of the target checksum signatures stored in a table. Obviously it could also be used to effect a "microcode" update, bypassing the documented "microcode" update mechanism. A similar technique could be applied to any actual hardware, such as a USB mouse, for example. A software vendor who also has control of the hardware in a widely-used mouse could employ those devices as a 'trusted' platform to verify that it is not running under emulation, and that an out of band channel through the USB interface could safely be used to apply firmware updates, say, to other USB devices attached to the same hub. For example, it could update any attached USB DVD-player firmware with a blacklist of signatures of files, the contents of which would then be altered as they were read or written to/from the device. This would not be too damaging, if its use were restricted to enforcement of so-called "intellectual property rights". However the same mechanisms, through subversion or otherwise, could be used to interfere with software distributed on CD-ROM, or with data transmitted to/from electronic voting systems, or data transmitted/received by a cellular modem. Of course this problem is not unique to USB; the PCI bus interface offers similar "opportunities," with the potential for re-programming the firmware on network adapters, disk drives, storage area networks etc. Such devices typically have more potential for autonomous data-processing than do USB devices. 
The ability to do after-the-fact updates of device firmware is not a necessary pre-requisite for the establishment of covert functions in hardware, but it makes them very easy to retrofit to hardware in the field, and that hardware is therefore correspondingly more difficult to protect against subversion: if the manufacturer can install a "microcode" update then, in principle, so can anyone else. Even if the probability that any one device is compromised is relatively small, the number of such devices in even a modest network of workstations and servers makes it more likely than not that at least one has in fact been compromised. Furthermore, if one device on the network is compromised then the probability that others will be compromised as a result is far higher, and so on and so forth. There is therefore a strong case to be made for imposing a legal requirement on hardware manufacturers to fully disclose the internal software interface specifications of every digital data gathering, communications, computing or storage device with a firmware field-update capability. The lawful owners of these devices would then have control over which updates are applied, and what that software actually does. They could thereby more effectively secure those devices, because there would no longer exist a single point of failure which would enable a successful attacker to compromise any and every instance of a particular class of device. It is conceivable that such a motion would be delayed, or even successfully opposed by the manufacturers. In that case other techniques will be needed to secure integrity, privacy and availability of computer and communications systems with field-updatable firmware. 
One plausible approach is to restrict direct software access to device hardware, and pass all device i/o operations through a highly restrictive channel: a formally-specified interpreter implementing the required operations in such a way as to provide NO MEANS WHATSOEVER by which ANY software could detect whether physical hardware was in fact being used at all to implement the function of the device. In effect, one would run ALL software on an emulated processor, just like the Intel Atom does. Any such interpreter software would ultimately have to have direct hardware access, and that layer of software would have to be trusted. To make such trust plausible, the operational semantics of the interpreter (i.e. both the processor emulator and the peripheral hardware emulator) would need to be formally specified, and that specification be machine-readable, so that implementations could be automatically generated in any sufficiently expressive programming language. This is not as difficult to do as it might at first seem. The open source Bochs emulator is a reasonably complete emulation of the publicly specified fragment of the operational semantics of a large class of x86 CPUs, and a modest collection of common PC hardware interfaces. Bochs is written in C++, but in fact the core instruction emulation code is remarkably uniform, and an abstract operational semantics could be extracted from it with only a little difficulty. Initially, this would be in the form of C expressions, but it could be further formalised if it were re-written in an abstract assembler language such as that implemented by GNU Lightning. Such an abstract expression of device semantics could be used to implement Bochs-like emulators in many different contexts, which the current concrete C++ implementation prohibits, because they would require major restructuring of the code.
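As a toy illustration of what "machine-readable operational semantics" might mean, each instruction can be expressed as a pure function on machine state, and an interpreter generated mechanically from that table. None of this is Bochs code and the three-instruction machine is invented; it is only a sketch of the shape such an extraction might take.

```python
# A toy machine-readable operational semantics: each instruction is a
# pure function from (state, operands) to a new state. The semantics
# is *data*, so any number of interpreters (or translators) can be
# generated from it mechanically.

SEMANTICS = {
    "mov": lambda st, r, v: {**st, r: v},          # load immediate
    "add": lambda st, r, s: {**st, r: st[r] + st[s]},
    "xor": lambda st, r, s: {**st, r: st[r] ^ st[s]},
}

def make_interpreter(table):
    """Generate an interpreter from a semantics table: fold the
    per-instruction state transformers over the program."""
    def run(program, state):
        for op, *args in program:
            state = table[op](state, *args)
        return state
    return run

run = make_interpreter(SEMANTICS)
prog = [("mov", "a", 6), ("mov", "b", 7), ("add", "a", "b")]
assert run(prog, {})["a"] == 13
```

The point is that `make_interpreter` knows nothing about any particular instruction: swap in a different table and you have a different machine, which is what "automatically generated implementations" would mean at full scale.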
For example, one could generate a version of the emulator which allowed emulation of different machine states in different threads of the same process; or which embedded an emulator in an interpreted programming language, allowing programmed interaction with the emulation in progress; or which split the emulation of one virtual machine between co-processes running on different physical and/or virtual machines. The possibilities are endless. If that abstract assembler language were fairly carefully specified, then translations to machine code for particular processors could be partially verified by automatic means, wherever a machine-readable specification of the operational semantics of the target hardware was available. For example, Intel's (closed-source) XED library, distributed as part of their Pin tool, provides a partial specification of x86 semantics by way of the resource flags it provides with each instruction decoding. These specify the gross effects of the instruction in terms of the CPU flags upon which it depends, and those which it affects, the registers and memory locations which are read/written, etc. If the abstract assembler had a similar machine-readable formal semantics, then these could be compared to partially verify any translation between the abstract and concrete assembler languages. Given more than one such formal specification of different CPU emulators, one could arrange for implementations to be stacked: emulating the emulators. Then it is not too hard to see how one could stack another, different pair of emulators together on some other physical hardware, and compare the results of emulating the same program. The more available implementations of processor semantics there were, the more confidence would be justified in the correctness of those semantics. 
So although we cannot trust any hardware to directly interpret its own instructions, we could perhaps trust it to directly interpret its own instructions when they are emulating those of some other processor which is emulating that system. The interpretation stack need not be constant over the duration of the program's execution: I am told it is not too difficult to migrate running virtual machines from one physical machine to another, so it should be significantly easier to migrate a virtual machine image from one virtual machine to another, whether those virtual emulators are running on the same physical machine or not. An interpretive environment such as this could then be used whenever there was any doubt that the underlying hardware was trustworthy, i.e. until manufacturers are forced to publish the specifications of the internal firmware interfaces. Emulation need not be grossly inefficient: the existence of the Atom processors shows that emulating a CISC machine on a RISC machine is a workable proposition. There is no absolute requirement that the processors we emulate be real extant machines. In fact, it would be better if some of them weren't, because they would be less likely to be subverted. The key element is the existence of a _machine-readable formal specification_ of the operational semantics. The fact that the semantics is machine-readable means that it can be automatically implemented. It is important to realise that, although in one sense any C program at all is clearly machine readable, it is not necessarily a formal semantics, because it may not be amenable to automatic transformation into another implementation, unless that other implementation were effectively a C language interpreter of some kind. 
This is because of the well-known results of Rice and others, which show that the defined semantics of programs written in Turing-universal languages are not necessarily consistent: they are susceptible to syntactic fixedpoints, which are deliberately constructed contradictory interpretations of the operation of some particular implementation of the interpreter. But a processor emulator does not _have_ to be written in a Turing-universal language. Every part of its operation is necessarily finite, because it is a physically finite device with a finite state space. Therefore we can use a so-called domain-specific language to describe its operation, and that would be machine-readable in the sense we need it to be. We could then construct any number of particular implementations of interpreters (i.e. emulators of hardware and CPU) and although any finite subset of those _may_ be susceptible to the construction of a syntactic fixedpoint, the general class of such implementations as a whole will not, because they are indeterminate: a purely formal specification will not specify the representation of any particular implementation whatsoever. Putting it another way: any formal language which could specify the operational semantics of a broad class of modern CPUs and peripheral hardware would also be able to express many variations on existing hardware designs which had never before been realized. It is difficult to see how an attacker would be able to subvert the semantics of a device which has not yet been invented, provided the inventor was imaginative and her invention independent of, and substantially different from, any of those already well-known. All we need to do to make this real is to carefully specify a GNU Lightning-like abstract assembler, and formally describe the mappings from that language into the encoded instructions of real and imaginary machine languages. Anyone with any interest at all in computer security should look at [1].
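A minimal sketch of the cross-checking idea: two structurally different interpreters derived from the same finite specification can be compared exhaustively, precisely because the machine is finite and the description language is not Turing-universal. The machine here (a 4-bit accumulator with three operations) is invented for illustration.

```python
# Two independently structured interpreters for the same finite machine:
# a 4-bit accumulator with INC, DEC and NEG. Because both the state
# space and the instruction set are finite, agreement can be checked
# exhaustively -- something no Turing-universal semantics permits.

def interp_a(prog, acc):
    """Direct, branch-per-opcode implementation."""
    for op in prog:
        if op == "INC":
            acc = (acc + 1) % 16
        elif op == "DEC":
            acc = (acc - 1) % 16
        elif op == "NEG":
            acc = (-acc) % 16
    return acc

# table-driven implementation, generated from the "specification"
TABLE = {"INC": lambda a: (a + 1) % 16,
         "DEC": lambda a: (a - 1) % 16,
         "NEG": lambda a: (-a) % 16}

def interp_b(prog, acc):
    for op in prog:
        acc = TABLE[op](acc)
    return acc

# exhaustive cross-check: every length-3 program from every start state
ops = list(TABLE)
for a0 in range(16):
    for p in [(x, y, z) for x in ops for y in ops for z in ops]:
        assert interp_a(p, a0) == interp_b(p, a0)
```

Scaled up, the same move (many independently derived implementations, mechanically compared) is what would justify confidence in an emulator stack without trusting any single implementation.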
Those interested in learning more about this method of specifying operational semantics as processes of interpretation should look at John Reynolds' paper [2]. Those interested in the logical point of view: the same idea but from the other side of the Curry-Howard-Griffin correspondence, as it were, should look at Girard, Lafont and Taylor's [3]. Thanks to Stefan Monnier whose clear and insightful criticisms inspired these thoughts.

Ian Grant
La Paz, Bolivia
10 September 2014

References:

[1] Karger, P. A., and Schell, R. R. (1974) MULTICS Security Evaluation: Vulnerability Analysis, ESD-TR-74-193, Vol. II, Electronic Systems Division, Air Force Systems Command, Hanscom AFB, Bedford, MA, June. http://seclab.cs.ucdavis.edu/projects/history/papers/karg74.pdf
Also in Proceedings of the Computer Security Applications Conference, Las Vegas, NV, USA, December, pp 126-146.

[2] Reynolds, John C., "Definitional Interpreters for Higher-Order Programming Languages," Higher-Order and Symbolic Computation, 11, 363-397 (1998). Available on-line; search for: HOSC-11-4-pp363-397.pdf

[3] Girard, Jean-Yves, Lafont, Yves and Taylor, Paul. "Proofs and Types". Cambridge University Press, 1989. http://www.paultaylor.eu/stable/prot.pdf
Tuesday, 14 October 2014
The Navigator
Frank Herbert's Dune trilogy is one of the few sci-fi novels I've read and enjoyed. My favourite part of the elaborate culture Herbert constructs is the Guild Navigators. Human beings become mutant through consumption of vast quantities of a drug that turns their minds into devices which can warp space-time, and these creatures power space-craft at trans-luminal velocities.
Some Russian hackers apparently like these books too:
"In late August, while tracking the Sandworm Team, iSIGHT discovered a spear-phishing campaign targeting the Ukrainian government and at least one United States organization. The spear-phishing attacks coincided with the NATO summit on Ukraine held in Wales.
On September 3rd, our research and labs teams discovered that the spear-phishing attacks relied on the exploitation of a zero-day vulnerability impacting all supported versions of Microsoft Windows (XP is not impacted) and Windows Server 2008 and 2012. A weaponized PowerPoint document was observed in these attacks.
The vulnerability exists because Windows allows the OLE packager (packager.dll) to download and execute INF files. In the case of the observed exploit, specifically when handling Microsoft PowerPoint files, the packager allows a Package OLE object to reference arbitrary external files, such as INF files, from untrusted sources."

Now this does not sound to me like a bug as one would ordinarily use the term. This is a consciously designed feature of the OLE packaging API, which is so obvious that it would have shown up in even a cursory design review. Exploiting it, once you know it's there, is probably trivial.
And how, tell me, did iSIGHT (μυωπία) know that it wasn't being used during that month? Do they open all the power-point and excel spreadsheets that the people at JPMorgan send each other?
Why do people trust a company that produces such garbage? Because they charge a lot of money for it?
Free Speech, my Ass.
It looks as though some people want exclusive access to lessons on mathematical modelling:
Here's one critic's first impression:
I found lots of interesting essays in the 143 page file that you sent, covering a wide range of topics from mathematics, logic, history and literature, many things that would be fun to talk about. What I am still looking for in there is where this is all heading, how all your topics are connected. What is the overarching question that you seek to answer, the thesis you try to prove, the underlying narrative, etc.? Discussions of the Cuban missile crisis next to random walk theory? Is this some kind of puzzle?
Monday, 13 October 2014
Guerilla Logic
This is the book Alice is writing
https://drive.google.com/file/d/0B9MgWvi9mywhSG1CdGFyQnVoRFdkUS1XMzJCY01sNUFyeFRZ/view?usp=sharing

I must admit that when I read some parts of it the wandstrassewolf says to me "Did you really type that?! You crazy fuck!" But I think some chapters ought to interest those people who are trying to learn about the world from mathematical models of it, or who, following Kennedy, are trying to change reality by changing how it appears.
See Chapter 25 "Predicting the completely unpredictable", on p84, and Chapter 21 "The logic of a crisis" on p67. But also make sure you understand Chapter 19 "The curious events of 1812" and Chapter 20 "The curious events of 1962". In fact, the whole thing is altogether a lot of very curious events!
Enjoy,
Alice
P.S. It looks like those lovely ladies at the CIA are having to think about whether to let this one through or not, ... Ladies, remember, you don't need to repeat History lessons, because History repeats itself.
Sunday, 12 October 2014
"Security Theater"
This is a comment on Bruce Schneier's article "In Praise of Security Theater":
https://www.schneier.com/essays/archives/2007/01/in_praise_of_securit.html
As I'm also a great fan of "security theater" I got all excited when I saw the title, but sadly it turns out to be all about completely sensible things.
"Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they're a cost-effective security measure or not."

If the reality of security is based on probability then it cannot be essentially mathematical, because probability is not essentially mathematical. Probability is essentially epistemological: it is a question of what we know. The fact that we use "objective" mathematics to calculate probabilities is not sufficient to make what we know real. Our knowledge is only real when we know how or why it is that we know what we think it is we know. Taking Schneier's example here: we need to know how we know those statistics about baby abductions from hospitals were arrived at before we can make a judgement as to the probability that some particular baby will be abducted.
"But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don't feel secure, and you can feel secure even though you're not really secure."

Then later he writes, of security theater:
"It's only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases -- like with mothers and the threat of baby abduction -- a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered."

Schneier is clearly taking probable cost-effectiveness of security measures as the measure of the "reality" of security, which is unsupportable, and he would have known this if he'd discussed it with any woman. The first thing she would have asked him would have been "And how do you measure the cost to the mothers whose babies are abducted, or their families or the abducted child?"
What Schneier is talking about is not security at all, it's just insurance. These are the sorts of questions a big insurer employs actuaries to calculate. And a smart security consultant will be able to do these calcs too, and figure, ... "Well if I tell people this and that about security, the probability of my losing the right to professional indemnity insurance when my insurers have to pay out more than MAX million dollars is ... and if this is a Poisson distribution, and I retire in 5 years ... OK. Martha, write this report for the West Hollywood Maternity Hospital: the baby RFID tag alarm system is not real security because blah, blah, copy that from the report I did for the Baltimore District Hospital Administration last year."
So, what I've learned from Bruce is what real security really is, and I don't doubt for one minute that he's right. As Aristotle wrote in one of his ethics texts, I forget which, money becomes a measure of all things. But what a shame. It doesn't have to be like this, you know.
Real security is actual knowledge, not of risks, but of the probability of the effectiveness of the countermeasures that are in effect against the known risk. We don't need to know anything about the probabilities of the perceived risks becoming actual events, otherwise all security would cease to be real once the countermeasures to the risks were widely employed: because the risks would fall and the cost-effectiveness of the measures would go negative.
Now, if I can just figure out a way to make a shit-load of money as a security consultant .... maybe Bruce can help?
Ian
P.S. The wandstrassewolf just told me "No man, you make the same fuckin' mistake every time. You think these people are smart, don't you? Asshole! Fuck 'em harder, man! They ain't gonna think of this shit 'till you tell 'em about it."
OK, OK, take it easy, here's some nice Bolivian coke, ..., better now? Good.
So, what we need to think about, when we do cost-benefit analyses of security, is this:
- What event are we most afraid of?
- What is the cost to us if that event occurs?
- What is the most effective counter-measure to neutralize that threat?
- What is the cost of affecting this countermeasure?
- What is the probability that this counter-measure will fail?
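The five questions above plug straight into a bare expected-cost comparison. All the figures below are invented for illustration.

```python
# Expected-cost comparison built from the five questions above;
# every number here is hypothetical.

event_cost      = 1_000_000   # cost to us if the feared event occurs
p_event         = 0.01        # probability of the event, per year
measure_cost    = 5_000       # annual cost of effecting the countermeasure
p_measure_fails = 0.10        # probability the countermeasure fails

# expected annual cost with no countermeasure at all
do_nothing = p_event * event_cost

# with the countermeasure: pay for it, and still bear the loss in the
# fraction of cases where it fails
with_measure = measure_cost + p_event * p_measure_fails * event_cost

assert do_nothing == 10_000.0
assert with_measure == 6_000.0    # the measure pays for itself here
```

Note that the whole calculation stands or falls on `p_measure_fails`, the probability that the countermeasure fails, which is exactly the quantity the surrounding argument says real security knowledge consists of.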
Oh man! You don't get it do you? As far as these dumb fucks are concerned, you're the fuckin' threat, asshole! You gotta' tell'em it's the global economic collapse, and then you gotta tell'em what that costs, because they won't be able to figure that out for themselves, it's way too hard. I mean, these guys are runnin' their fuckin' economic models to show how their system handles four big banks going down on the same day.
But, Mr. Wolf, what big teeth you have!
Relational Semantics
I wrote this essay nearly five years ago:
https://drive.google.com/file/d/0B9MgWvi9mywhNHVPQWlUU0VaTnNwbEk0TlVBdlU2ZUZfajU0/view
It is about the role of semantics and interpretation in the measurement interactions of Quantum Mechanics. I am quite sure I will be able to explain the application of the same ideas of semantics, information and representation in computation and communications in terms of quantum mechanics; so another way to think about the security properties of the protocols I am proposing here:
http://marc.info/?l=openbsd-tech&m=141298399931901&w=2
is to explain them in terms of Quantum Cryptography. This will probably make it even less likely that OpenBSD developers will take me seriously.
The problem for some people seems to be that because they've never heard of these ideas before, the ideas must be wrong. So how can there ever be any new ideas? I suppose we have to just wait until the really clever people say the ideas are right before we hear about them. Who are the really clever people though? The people at Cambridge? A security expert at Cambridge who apparently has nothing to say about these ideas just told me yesterday, though not in so many words of course, because the people at Cambridge are all so frightfully polite, you know, that I am psychotic because Hermann Hesse wrote Steppenwolf.
So if we want someone really clever to say these ideas are right we will have to look elsewhere. Let's try Bruce.
Friday, 10 October 2014
The Foundation (Parts I,II & III)
This is the core of the idea of how we can realize The Foundation as a reason for all people to have faith, because they have confidence based on actual knowledge, in the future of Humanity.
https://drive.google.com/file/d/0B9MgWvi9mywhOWQtYXhsLWpaVkdoZUNOMmZ4UU50N0NjYmZv/view?usp=sharing
It's my daughter Helen's sixteenth birthday tomorrow. This is my present to her, which I promised in the document "Revolution" I wrote three years ago. For this project, she lost her father for five years. Please help me make it a reality, so that we can give her and billions of others, the gift of a really good higher education.
Love,
Ian
P.S. "The NSA refused to comment beyond a statement saying, “It should come as no surprise that NSA conducts targeted operations to counter increasingly agile adversaries.” The agency cited Presidential Policy Directive 28, which it claimed “requires signals intelligence policies and practices to take into account the globalization of trade, investment and information flows, and the commitment to an open, interoperable, and secure global Internet.” The NSA, the statement concluded, “values these principles and honors them in the performance of its mission.”
This was from: https://firstlook.org/theintercept/2014/10/10/core-secrets/
Tuesday, 7 October 2014
How to Design Software
This is a follow-up to my How to Write Programs, which was apparently utterly incomprehensible to 90% of "Free Software hackers"; to anyone who does understand it, that fact demonstrates its fundamental premiss.
What prompted me to start writing this was the blogger.com "system." It only seems to accept comments of 160 characters or something. Maybe there's a way to change that limit, but it should tell the "owner" of the blog what that is when it gives her the error. It didn't.
I consider a limit of 160 characters on comments anti-intellectual.
So I posted the long comments as an article (a new post, in the tech. blog jargon) and I wanted to refer to a particular paragraph of a previous post. I thought there might be an anchor for paragraphs, because this is the sort of thing that a lot of people would want to do. No there isn't.
I consider the inability to refer to particular paragraphs stupid.
So having insulted any and everyone who contributed so much as a line of css "code" to this "system," I will now go on to explain to everyone else how this sort of thing should be done.
Rather than writing shed-loads of perl/php/whatever-stupid-concrete-lock-in-hell-is-this-week's-fashion script and javascript, define an abstract language for describing web content, and distribute a machine-readable abstract syntax for it in the form of a context-free grammar. Then define a general-purpose bottom-up term-rewriting system language, and define abstract syntax for that too, in some sort of abstract assembler, or another special-purpose language if you like.
By the way, I am also bored that I keep saying the same thing over and over again ...
So you let people define their own translations of the content using this language. And you can even let them define database record layouts and stuff like that. And it can all be done securely because everything is defined by machine-readable formal syntax and semantics. And you can let people share these semantic definitions of abstract content, and so on and so forth.
Think of it like customized styles gone extreme.
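To make the idea concrete, here is a toy sketch of such a bottom-up term-rewriting pass, with all names (the `post`/`para`/`em` constructors, the rule set) invented purely for illustration: abstract content is a nested term, and each user supplies their own rules mapping constructors to output fragments.

```python
# A toy bottom-up term rewriter: a term is (constructor, child, ...),
# a leaf is plain text, and a rule set maps each constructor to a
# function producing output from the already-rewritten children.
# "Bottom-up" means children are rewritten before their parent node.

def rewrite(term, rules):
    """Rewrite a term of the form (constructor, child, ...) bottom-up."""
    if not isinstance(term, tuple):
        return term                                  # a leaf: plain text
    head, *children = term
    done = [rewrite(c, rules) for c in children]     # children first
    return rules[head](*done)                        # then this node

# One user's translation of abstract content into HTML; another user
# could supply a completely different rule set for the same terms.
html_rules = {
    "post": lambda title, body: f"<article><h1>{title}</h1>{body}</article>",
    "para": lambda *ts: "<p>" + "".join(ts) + "</p>",
    "em":   lambda t: f"<em>{t}</em>",
}

doc = ("post", "How to Design Software",
       ("para", "Define the content ", ("em", "abstractly"), "."))
print(rewrite(doc, html_rules))
# → <article><h1>How to Design Software</h1><p>Define the content <em>abstractly</em>.</p></article>
```

The same `doc` term could be rewritten to plain text, a database record layout, or anything else, simply by swapping the rule set; that is the "customized styles gone extreme" part.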
Ian
Panicz wants kisses!
This is Panicz, responding to my comments to his comments on this response of mine to his email.
(Ian): These 'metrological ideals' are simply facts that anyone could know who takes the trouble to actually think about them

I clearly must have misunderstood you then. I thought that you criticized the education system for not conveying the, as you call it, "fundamental principle of epistemology". Now it seems to me that you are claiming that no educational system is needed, because anyone can make an effort to think for himself and reach that conclusion alone.
Is it then that you suggest that it would be better to abandon the idea of maintaining education systems, because people would as well do without them, or that the education systems that you've had experience with need to be fixed? (or did I misunderstand you completely?)
(Ian): Skepticism is the belief that knowledge is impossible; 'metrology' is quite literally "knowledge of measure," so metrology is the rational basis of all quantitative scientific knowledge. The idea that metrology is skeptical is absurd.

The claim that "skepticism is the belief that knowledge is impossible" is as naive as the claim that the word "relativism" can be interpreted in only one way. Although the skeptics did argue that knowledge is impossible, it is not the subject of their arguments that is important (because such a claim is simply insane), but the arguments themselves. According to what you call WikipediA, the Greek word "skeptikos" means "doubtful". The argument of Descartes is by all means skeptical, yet he does not claim that knowledge is impossible.
I do agree, though, that calling metrology a skeptical (or critical) science would be absurd. Therefore, clearly, this adjective did not apply to metrology itself, but to your interpretation of metrology, that is, the application of some principles of metrology to epistemology.
(Ian): I don't regard this as a problem at all, because it is simply not true that "civilization is the struggle to make our lives effortless". The fact that you make such a ridiculous claim without justification is yet more evidence that you clearly do not understand that fundamental principle of epistemology which I enunciated.

I perceive your claim, that my claim is ridiculous, as ridiculous (because my claim seems rather obvious to me), yet you didn't bother to justify it either, which clearly proves that you also don't understand your fundamental principle of epistemology.
The reason for me aping you is actually not to prove that you don't understand what you're saying (although if one asserts the validity of your argument, then he or she would necessarily conclude the validity of mine, unless willing to accept inconsistency), but to note that -- although you seem to be very intelligent -- your communication skills need an improvement badly.
It would be OK if you wrote "I disagree with your claim because xxx, although I'm eager to hear your justification, because maybe I didn't understand you properly," instead of offending me right away. The one thing that I know from dealing with language processing, is that a natural language is one ambiguous bitch. (In addition, I have to admit that I'm not a native English speaker, and so I often find it difficult to choose the right words for my thoughts)
(Ian): The rest of your missive is merely a feeble appeal to blind faith that there is good in the world. I don't need blind faith, because I have actual knowledge that this is in fact the case.

That's good (although untrue).
(Ian): That good is God, who is the actuality of thought, and this is something that is quite clearly outside your experience.

Interesting. Perhaps your insight and cognitive capabilities are much greater than mine, but imagining that is beyond those humble capabilities. My narrow mindset allows me only to imagine that your mental capabilities are similar to mine (because that's the highest level of comprehension that I can comprehend), and if it is so, then I need to assert that the statement that you uttered is beyond your cognitive competence. (I do agree with it, though)
Saturday, 4 October 2014
The Foundation (Part II)
This is a reformatted version of Dijkstra's "The Mathematics Behind the Banker's Algorithm" with half a page of comments regarding some interesting possibilities for optimization.
https://drive.google.com/file/d/0B9MgWvi9mywhTno5Y0VLakZDNGpTYW5jUHI3M0xLenZsdlhF/view?usp=sharing
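For readers who have not seen Dijkstra's note, the heart of it is a safety test: a resource state is safe if and only if the processes can be ordered so that each one's worst-case remaining claim fits within the free resources plus everything released by the processes before it. The following is my own minimal sketch of that test for a single resource type (the example figures are invented, not Dijkstra's):

```python
# Safety test at the core of the Banker's Algorithm, single resource type.
# free:        units currently available
# allocation:  allocation[i] = units currently held by process i
# claim:       claim[i] = the maximum units process i may ever need

def is_safe(free, allocation, claim):
    """Return True iff every process can be run to completion in some order."""
    need = [c - a for c, a in zip(claim, allocation)]  # worst-case remaining requests
    done = [False] * len(allocation)
    avail = free
    progress = True
    while progress:                       # greedily retire any satisfiable process
        progress = False
        for i, finished in enumerate(done):
            if not finished and need[i] <= avail:
                avail += allocation[i]    # process i finishes and returns its units
                done[i] = True
                progress = True
    return all(done)

# Invented example: 3 processes, claims [4, 6, 7], holdings [1, 5, 2].
print(is_safe(2, [1, 5, 2], [4, 6, 7]))  # → True  (e.g. retire 1, then 0, then 2)
print(is_safe(0, [1, 5, 2], [4, 6, 7]))  # → False (no process's need fits in 0 free)
```

The greedy loop is O(n²) in the number of processes; the optimization remarks in the document concern doing better than this naive scan.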
Thursday, 2 October 2014
The Foundation (Part I)
I wrote this to try once again to explain what is the nature of the problem that one would have in verifying the integrity of any software toolchain, whether it is aimed ultimately at the production of other software, or of hardware.
https://github.com/IanANGrant/red-october/blob/master/judgemnt.pdf
This three page text is ostensibly about verifying the integrity of a communications link operating over an un-trusted transport layer, but a compiler is really a type of communications channel.
I am sure everyone still reading this has wondered about the possibilities of using universal programming languages (universal in the Church-Turing sense) as communications protocols. For example, one could establish a point-to-point connection by writing a program which, when run, outputs two more programs: one which, when run, outputs a decoding pad for the next message one would transmit over that channel, and the other the decoder, which prints the message text together with another program, the encoder for returning an acknowledgement. Both endpoints would do this, and so the programs/messages would be exchanged, each one encoding the text of the other. These programs could then include decisions not only about the encoding of data, the choice of one-time pads, etc., but perhaps also the routes of messages, sending different parts via different trusted routes over similar channels, and so on. The variations are endless, and limited only by the ingenuity of the programmers communicating over those channels.
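A toy sketch of the "messages are programs" idea might look like the following (all names are invented for illustration, and plain XOR with a fresh pad stands in for whatever encoding a real exchange would negotiate; this shows only the bootstrap mechanics, not a secure design):

```python
# Each "message" is Python source which, when executed, yields the
# decoded text it carries AND the encoder the peer must use for its
# reply. The decoding knowledge travels inside the message itself, so
# no fixed wire format ever needs to exist.

import os

def make_message(plaintext: str, reply_pad: bytes) -> str:
    """Build a message-program carrying `plaintext` plus the reply encoder."""
    data = plaintext.encode()
    pad = os.urandom(len(data))
    cipher = bytes(a ^ b for a, b in zip(data, pad))
    return (
        f"cipher = {cipher!r}\n"
        f"pad = {pad!r}\n"
        f"text = bytes(a ^ b for a, b in zip(cipher, pad)).decode()\n"
        f"reply_pad = {reply_pad!r}\n"
        f"def encode_reply(s):\n"
        f"    return bytes(a ^ b for a, b in zip(s.encode(), reply_pad))\n"
    )

def run_message(src: str):
    """Execute a received message-program: the program IS the protocol."""
    env = {}
    exec(src, env)
    return env["text"], env["encode_reply"]

# Usage: the sender's message also fixes how the receiver must encode
# its acknowledgement.
reply_pad = os.urandom(64)
text, encode_reply = run_message(make_message("meet at dawn", reply_pad))
assert text == "meet at dawn"
ack = encode_reply("ack")
assert bytes(a ^ b for a, b in zip(ack, reply_pad)).decode() == "ack"
```

In this sketch only the pads change between messages, but since each message is an arbitrary program, it could just as well replace the cipher, the framing, or the route for everything that follows.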
And really, I sorely pity anyone charged with organising any kind of surveillance of a group of people who enjoy that sort of game. Cracking "the code" would be practically impossible, because there need never be any fixed concrete representation whatsoever of the fundamental encoding as it is transmitted over the transport medium: all of the knowledge about the current and future encodings can be sent over the previous "incarnations" of that and/or another channel, and the encoding of channels thereby made non-deterministic. This means that there could never be, in principle, any mechanical process whatsoever which could decode more than a few parts of any of those messages. After this brief success, the poor would-be spy would be right back at square one.
What I try to explain here is the essential distinction between what I call actual knowledge, as opposed to mere represented knowledge, such as a password, or an SSL certificate, or the documentation for some file format appearing on a web page. The distinction is that only in the case of actual knowledge does one know how and why one knows.
The motivation is the idea that by using actual rather than represented knowledge, it is possible to construct such a trustworthy system in practice. But there's a catch! The catch is that this will only work for an organisation whose motives and governance are completely open and transparent. This is because the technique relies upon mutual trust, which is something that cannot exist without openness and transparency. Bad guys just won't get it! To understand why (in case it is not immediately obvious to you, that is) you will need to read (or at least think carefully about) about how error-detection would work in such a system.
The text consists of a title page with the abstract, and two full pages. So it should be an easy read. I earlier sent out a nine page document entitled GNU Thunder, in which I attempted to describe what I consider to be essentially the same idea, but with the emphasis on compilers and interpreters, rather than communications. The Thunder text was a concrete suggestion for an implementation. This text however is more abstract. But these two documents could be considered to be complementary in two different senses.
I hope everyone enjoys this, and that it stimulates some interesting thoughts, and subsequent discussion, and that those thoughts are all good, and the discussion open and transparent. That way we could save ourselves an awful lot of really hairy metaprogramming!
Please feel free to copy either text and/or this message and pass them around. Neither of the two texts is copyrighted, and the more people who see them the better. Bad guys in particular need to know about this much more than the good ones do.