It’s a sad commentary on the state of the world when it becomes a good practice to closely inspect the card reader on every ATM and gas pump for the presence of a skimmer. The trouble is, even physically yanking on the reader may not be enough, as more sophisticated skimmers now reside safely inside the device, sipping on the serial comms output of the reader and caching it for later pickup via Bluetooth. Devilishly clever stuff.
Luckily, there’s an app to detect these devices, and the prudent consumer might take solace when a quick scan of the area reveals no skimmers in operation. But is that enough? After all, how do you know the smartphone app is working? This skimmer scammer scanner — or is that a skimmer scanner scammer? — should help you prove you’re being as safe as possible.
The basic problem that [Ben Kolin] is trying to solve here is: how do you prove a negative? In other words, one could easily write an app with a hard-coded “This Area Certified Zebra-Free” message and market it as a “Zebra Detector,” and 99.999% of the time it’ll give you the right results. [Ben]’s build provides the zebra, as it were, by posing as an active skimmer to convince the scanner app that a malicious Bluetooth device is nearby. It’s a quick and dirty build with a Nano, a Bluetooth module, and a half-dozen lines of code, but it does the trick.
In the first project, [Dan] had to figure out how to talk to the printer since the RS422 cable it came with didn’t seem to work. He bought a TTL-to-RS485 adapter, but then realized he could use TTL directly and wired up an ESP32/OLED dev board to it. During the course of turning it into a photo booth, he had to switch to a bigger screen with a better refresh rate.
Unfortunately, [Dan] was unable to use Haskell by itself. He blames this on the cobwebs in the Haskell ecosystem, something that isn’t a problem for languages like Python that celebrate wide usage and support. [Dan] wrote a Python script that handles image capturing, display, and listening for touch activity on the screen, but Haskell ultimately controls the printer. Check out [Dan]’s demo after the break.
This project may have been trying at times, but at least [Dan] didn’t have to give it a brain transplant to get it to do what he wanted.
How would you sell a computer to a potential buyer? Fast? Reliable? Great graphics and sound? In 1956, you might point out that it was somewhat smaller than a desk. After all, in those days what people thought of as computers were giant behemoths. Thanks to modern FPGAs, you can now have a replica of a 1956 computer — the LGP-30 — that is significantly smaller than a desk. The LittleGP-30 is the brainchild of [Jürgen Müller].
The original also weighed about 740 pounds, or a shade under 336 kg, so the FPGA version wins on mass as well. The LGP-30 owed its relatively svelte footprint to the fact that it used only 113 tubes, and of those, only 24 were in the CPU. This was possible because, like many early computers, the CPU worked on one bit at a time. While a modern computer will add a whole word at once, this computer — even the FPGA version — adds each operand one bit at a time.
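To get a feel for what bit-serial arithmetic looks like, here is a minimal Python sketch of the idea — not the FPGA implementation, just an illustration of a full adder applied to one bit position per step, the way a serial ALU works:

```python
def bit_serial_add(a, b, width=32):
    """Add two unsigned integers one bit at a time, LSB first,
    the way a bit-serial CPU like the LGP-30 does it."""
    carry = 0
    result = 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        result |= (bit_a ^ bit_b ^ carry) << i               # full-adder sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a | bit_b))  # carry out
    return result
```

One trip around the loop corresponds to one bit-time; a 32-bit add therefore takes 32 clock cycles instead of one, which is the price paid for needing only a handful of tubes.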
The LGP-30 had a Friden Flexowriter (a TeleType-like machine made by a company eventually bought by Singer, the sewing machine company) and a magnetic drum with 4096 32-bit words. To keep the component count down, the drum stored the program, the CPU registers, and even the 120 kHz system clock. There were also 1,450 solid-state diodes, which helped. To avoid building a lot of blinking lights, the front panel had an oscilloscope that displayed three registers. About 500 units were sold, at around $47,000 apiece.
The FPGA version — mercifully — is less expensive. It uses a Xilinx Spartan 6 development board and a custom PCB that even duplicates the oscilloscope on an LCD. You might notice some strange characters on the oscilloscope. Even though the computer used hexadecimal (which was unusual in those days), it did not use A-F for the extra digits. Instead, it used characters that were easier for the limited hardware to decode: f, g, j, k, q, and w. So 255 in LGP-30-speak is ww not FF.
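A quick Python illustration (ours, not part of the FPGA project) makes that digit mapping concrete:

```python
# The LGP-30's sixteen "hexadecimal" digits: 0-9, then f, g, j, k, q, w
LGP30_DIGITS = "0123456789fgjkqw"

def to_lgp30_hex(n):
    """Render a non-negative integer in the LGP-30's hexadecimal notation."""
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append(LGP30_DIGITS[n & 0xF])  # take the low nibble
        n >>= 4
    return "".join(reversed(digits))

to_lgp30_hex(255)   # 'ww', where a modern machine would print FF
```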
Although the FPGA version is faithful, inexpensive, and small, it isn’t the first solid-state version of the architecture. Librascope — the company behind the LGP-30 — rolled out the LGP-21 in 1963, which had fewer than 500 transistors and 300 diodes. It wasn’t as fast as the LGP-30, though, and cost a comparatively measly $16,200. Then again, the FPGA board costs less than $40; even though the front panel and case will push the total up, it will still come in well under that price. If you want a peek inside the real machine, check out the video below.
Any time we see bit-serial CPUs, it reminds us of EDSAC. One thing that was interesting to us was that a 113-tube machine would have been within reach of the day’s hackers if they’d had the plans. In 1967, for example, people did build the Wireless World Computer with around 400 transistors.
Modeling machines off of biological patterns is the dry definition of biomimicry. For most people, this means the structure of robots and how they move, but Christine Sunu makes the argument that we should be thinking a lot more about how biomimicry has the power to make us feel something. Her talk at the 2017 Hackaday Superconference looks at what makes robots more than cold metal automatons. There is great power in designing to complement natural emotional reactions in humans — to make machines that feel alive.
We live in a world that is being filled with robots and increasingly these are breaking out of the confines of industrial automation to take a place side by side with humans. The key to making this work is to make robots that are recognizable as machines, yet intuitively accepted as being lifelike. It’s the buy-in that these robots are more than appliances, and Christine has boiled down the keys to unlocking these emotional reactions.
She suggests starting with the “inside” of the design. This is where the psychological triggers begin. Does the creature have needs, does it have a purpose? Humans are used to recognizing other living things and they all have internal forces that drive them. Including these in your designs is the foundation for lifelike behavior.
The outside design must match this, and this is where Christine has advice for avoiding the Uncanny Valley — an emotional reaction to machines that look almost too real but some cue breaks the spell. She suggests using combinations of critters as the basis for the design so as not to be locked into strong associations with the living things used as the model. The motion of the robot should be carefully designed to use acceleration that makes sense with the biological aspects of the robot and the task it’s performing. Think about how jerky, unnatural motion is used in horror movies to elicit fright — something you don’t want to recreate in a robot companion.
Her last parameter on successful biomimicry design is “dissonance”. This is perhaps the most interesting part. Humans will have expectations for living things, and expectations for machines. Trying to completely hide that machine side is a mistake. Christine uses the new Sony Aibo “pet” robot as an example. It behaves like a lovable dog without the unpleasant parts of pet ownership like house training and being around to feed it. The thing that Sony is likely missing is doing amazing “robot things” with the new robot pet. As Christine puts it, they kind of stopped being creative once they implemented the “low tech meat dog” behaviors.
Don’t miss Christine Sunu’s full Supercon talk embedded below. She has also published her talk slides and you can learn more about what she’s working on by checking out her website.
In the early 20th century, Guinness breweries in Dublin had a policy of hiring the best graduates from Oxford and Cambridge to improve their industrial processes. At the time, it was considered a trade secret that they were using statistical methods to improve their process and product.
One problem they were having was that the z-test (a commonly used test at the time) required large sample sizes, and sufficient data was often unavailable. By studying the properties of small sample sizes, William Sealy Gosset developed a statistical test that required fewer samples to produce a reasonable result. As the story goes, though, chemists at Guinness were forbidden from publishing their findings.
So he did what many of us would do: realizing the finding was important to disseminate, he adopted a pseudonym (‘Student’) and published it. Even though we now know who developed the test, it’s still called “Student’s t-test” and it remains widely used across scientific disciplines.
It’s a cute little story of math, anonymity, and beer… but what can we do with it? As it turns out, it’s something we could probably all be using more often, given the number of Internet-connected sensors we’ve been playing with. Today our goal is to cover hypothesis testing and the basic z-test, as these are fundamental to understanding how the t-test works. We’ll return to the t-test soon — with real data.
I recently purchased two of the popular DHT11 temperature-humidity sensors. The datasheet (PDF warning) says that they are accurate to +/- 2 degrees C and 5% relative humidity within a certain range. That’s fine and good, but does that mean the two specific sensors I’ve purchased will produce significantly different results under the same conditions? Different enough to affect how I would use them? Before we discuss how to quantify that, we’ll have to go over some basic statistical theory. If you’ve never studied statistics before, it can be less than intuitive, so we’ll go over a more basic test before getting into the details of Student’s t-test.
It’s worth starting by mentioning that there are two major schools of statistics – Bayesian and Frequentist (and there’s a bit of a holy war between them). A detailed discussion of each does not belong here, although if you want to know more this article provides a reasonable summary. Or if you prefer a comic, this one should do. What’s important to remember is that while our test will rely upon the frequentist interpretation of statistics, there are other correct ways of approaching the problem.
For our example, imagine for a moment you are working quality control in a factory that makes 100 Ω resistors. The machinery is never perfect, so while the average value of the resistors produced is 100 Ω, individual resistors have slightly different values. A measure of the spread of the individual values around the 100 Ω average is the standard deviation (σ). If your machine is working correctly, you would probably also notice that there are fewer resistors with very high deviations from 100 Ω, and more resistors closer to 100 Ω. If you were to graph the number of resistors produced of each value, you would probably get something that looks like this:
This is a bell curve, also called a normal or Gaussian distribution, which you have probably seen before. If you were very astute, you might also notice that 95% of your resistor values are within two standard deviations of our average value of 100 Ω. If you were particularly determined, you could even make a table for later reference defining what proportion of resistors would be produced within different standard deviations from the mean. Luckily for us, such tables already exist for normally distributed data, and are used for the most basic of hypothesis tests: the z-test.
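You don’t even need a printed table to check that figure: the cumulative probability of a normal distribution can be computed from the error function in the Python standard library. This little sketch reproduces the familiar 68–95–99.7 rule:

```python
from math import erf, sqrt

def fraction_within(k):
    """Fraction of normally distributed values lying within
    k standard deviations of the mean."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(k, round(fraction_within(k), 4))   # 0.6827, 0.9545, 0.9973
```

The k = 2 line is the "95% within two standard deviations" rule of thumb from above.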
Let’s say you then bought a machine that produces 100 Ω resistors — you quit your job in QC and have your own factory now. The vendor seemed a bit shady though, and you suspect the machine might actually be defective and produce resistors centered on a slightly different value. To work this out, there are four steps: develop a set of hypotheses, sample data, check if the sampled data meets the assumptions of your test, then run the test.
There are only two possibilities in our case: the machine either produces resistors that are significantly different from 100 Ω, or it doesn’t. More formally you have the following hypotheses:
H0: The machine does not produce resistors that are significantly different from 100 Ω
HA: The machine produces resistors that are significantly different from 100 Ω
H0 is called our null hypothesis. In classical statistics, it’s the baseline, or the hypothesis to which you’d like to give the benefit of the doubt. Here, it’s the hypothesis that the machine’s output doesn’t differ from the specified 100 Ω. We don’t want to go complaining to the manufacturer unless we have clear evidence that the machine isn’t making good resistors.
What we will do is use a z-score table to determine the probability that some sample we take is consistent with H0. If the probability is too low, we will decide that H0 is unlikely to be true. Since the only alternative hypothesis is HA, we then decide to accept HA as true.
As part of developing your hypotheses, you will need to decide how certain you want to be of your result. A common value is 95% certainty (also written as α=0.05), but higher or lower certainty is perfectly valid. Since in our situation we’re accusing someone of selling us shoddy goods, let’s try to be quite certain first and be 99% sure (α=0.01). You should decide this in advance and stick to it – although no one can really check that you did. You’d only be lying to yourself, though; it’s up to your readers to decide whether your result is strong enough to be convincing.
Sampling and Checking Assumptions
Next you take a random sample of your data. Let’s say you measure the resistance of 400 resistors with your very accurate multimeter, and find that the average resistance is 100.5 Ω, with a standard deviation of 1 Ω.
The first step is to check if your data is approximately shaped like a bell curve. Unless you’ve purchased a statistical software package, the easiest way I’ve found to do this is using the scipy stats package in Python:
import scipy.stats as stats
result = stats.normaltest(list_containing_data)  # result.pvalue holds the p-value
As a very general rule, if the result (output as the ‘pvalue’) is more than 0.05, you’re fine to continue. Otherwise, you’ll need to either choose a test that doesn’t assume a particular data distribution or apply a transformation to your data — we’ll discuss both in a few days. As a side note, testing for normality is sometimes ignored when required, and the results published anyway. So if your friend forgot to do this, be nice and help them out – no one wants this pointed out for the first time publicly (e.g. at a thesis defense or after a paper is published).
Performing the Test
Now that the hard part is over, we can do the rest by hand. To run the test, we determine how many standard errors away from 100 Ω our sample average is. The standard error is the standard deviation divided by the square root of the sample size. This is why bigger sample sizes let you be more certain of your results – everything else being equal, as sample size increases your standard error decreases. In our case the standard error is 0.05 Ω.
Next we calculate the test statistic, z. This is the difference between the sample mean of 100.5 Ω and the value we’re testing against of 100 Ω, divided by the standard error. That gives us a z value of 10, which is rather large as z-statistic tables typically only go up to 3.49. This means the probability (p) of seeing a sample mean this far from 100 Ω if the null hypothesis were true is less than 0.001 (or less than 0.1% if you prefer). We would normally report this as p < 0.001, as no one really cares what the precise value of p is when it’s that small.
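The whole calculation fits in a few lines of Python. This is just the arithmetic described above, with the two-sided p-value computed from the error function rather than looked up in a table:

```python
from math import erf, sqrt

def z_test(sample_mean, hypothesized_mean, std_dev, n):
    """One-sample z-test: returns the z statistic and the two-sided p-value."""
    standard_error = std_dev / sqrt(n)
    z = (sample_mean - hypothesized_mean) / standard_error
    p = 1 - erf(abs(z) / sqrt(2))   # probability of a deviation at least this large
    return z, p

z, p = z_test(100.5, 100.0, 1.0, 400)   # z = 10.0, p far below 0.001
```

For a z of 10 the p-value actually underflows to zero in double precision, which is about as emphatic a p < 0.001 as you could ask for.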
What Does it Mean?
Since our calculated p is lower than our threshold α value of 0.01, we reject the null hypothesis that the average value of resistors produced by the machine is 100 Ω… there’s definitely an offset, but do we call our vendor?
In real life, statistical significance is only part of the equation. The rest is effect size. So yes, our machine is significantly off specification… but with a standard deviation of 1 Ω, it wasn’t supposed to be good enough to produce 1% tolerance resistors anyway. Even though we’ve shown that the true average value is higher than 100 Ω, it’s still close enough that the resistors could easily be sold as 5% tolerance. So while the result is significant, the (fictional) economic reality is that it probably isn’t relevant.
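To put a number on that claim, assume production really is normal with the measured mean of 100.5 Ω and σ of 1 Ω, and ask what fraction of resistors lands inside the 5% band of 95–105 Ω. (The helper below is ours, built on the normal CDF; it isn’t part of any library.)

```python
from math import erf, sqrt

def normal_cdf(x, mean, sigma):
    """Cumulative probability of a normal distribution at x."""
    return 0.5 * (1 + erf((x - mean) / (sigma * sqrt(2))))

def fraction_in_band(mean, sigma, low, high):
    """Fraction of production falling between low and high."""
    return normal_cdf(high, mean, sigma) - normal_cdf(low, mean, sigma)

share = fraction_in_band(100.5, 1.0, 95.0, 105.0)   # ~0.99999: nearly all in spec
```

Despite the statistically significant offset, effectively every resistor the machine produces would pass as a 5% part.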
This is all well and good for our fictional example, but in real life data tends to be expensive and time-consuming to collect. As hackers, we often have very limited resources. This is an important limitation to the z-test we’ve covered today, which requires a relatively large sample size. While Internet-connected sensors and data logging are inexpensive these days, a test that puts more knowledge within the reach of our budget would be great.
We’ll return in a short while to cover exactly how you can achieve that using a t-test, with examples in Python using a real data set from IoT sensors.
We sometimes forget that the things we think of as trivial today were yesterday’s feats of extreme engineering. Consider the humble pocket calculator, these days so cheap and easy to construct that they’re essentially disposable. But building a simple “four-banger” calculator in 1962 was anything but a simple task, and it’s worth looking at what one of the giants upon whose shoulders we stand today accomplished with practically nothing.
If there’s anything that [Cliff Stoll]’s enthusiasm can’t make interesting, we don’t know what it would be, and he certainly does the job with this teardown and analysis of a vintage electronic calculator. You’ll remember [Cliff] from his book The Cuckoo’s Egg, documenting his mid-80s computer sleuthing that exposed a gang of black-hat hackers working for the KGB. [Cliff] came upon a pair of Friden EC-132 electronic calculators, and with the help of [Bob Ragen], the engineer who designed them in 1962, got one working. With a rack of PC boards, cleverly hinged to save space and stuffed with germanium transistors, a CRT display, and an acoustic delay-line memory, the calculators look ridiculous by today’s standards. But when you take a moment to ponder just how much work went into such a thing, it really makes you wonder how the old timers ever brought a product to market.
As a side note, it’s great to see that [Cliff] is still so energetic after all these years. Watching him jump about with such excitement and passion really gets us charged up.
Thanks to [Mark] and [Jerrad] for the near-simultaneous tips on this one.