Planet Primates

October 31, 2014


Clojure REST API Frameworks

I'm a Python developer taking the plunge into Clojure, and I decided that taking on a real project is the best way to get started (after learning the syntax, of course). I decided to build a REST API for a project that I'm working on. Things worth knowing:

  • The database already exists.
  • The schema is already defined, implemented, and filled with data.
  • The web portion of the app is written in Django, which communicates directly with the DB.
  • The database is MySQL.

So my question is: what is the best framework for creating a REST API in Clojure in late 2014? So far I've come across Caribou and Liberator, but I don't know how to evaluate which is better (being a Clojure noob).

by Jack Slingerland at October 31, 2014 02:03 PM


Ziercke's successor has been decided, and to me at least ...

Ziercke's successor has been decided, and to me at least the name means nothing. I would consider that a good thing for a start. He also looks comparatively young and dynamic, like a Tatort detective. I'm curious to see how he will prove himself.

October 31, 2014 02:01 PM



How to print Akka configuration at startup?

I have an Akka project with several config files. Is it possible to print the merged Akka configuration even when the application starts (or stops) with errors?

by Cherry at October 31, 2014 01:58 PM

Scala partition into more than two lists

I have a list in Scala that I'm trying to partition into multiple lists based on a predicate that involves multiple elements of the list. For example, if I have

a: List[String] = List("a", "ab", "b", "abc", "c")

I want to get b: List[List[String]], a List of List[String] such that the sum of the lengths of each inner List[String] is 3, i.e. List(List("a", "b", "c"), List("abc"), List("ab", "a"), ...)

[Edit] It needs to take a reasonable time for lists of length 50 or less.
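A brute-force sketch (in Python, my own function name) pins down the expected output, though note it enumerates combinations and is exponential, so it would not meet the length-50 requirement above:

```python
from itertools import combinations

def groups_of_total_length(xs, target=3):
    """All sub-collections of xs (order preserved) whose combined
    string length equals `target` -- brute force, illustration only."""
    result = []
    for r in range(1, len(xs) + 1):
        for combo in combinations(xs, r):
            if sum(len(s) for s in combo) == target:
                result.append(list(combo))
    return result

# e.g. for List("a", "ab", "b", "abc", "c")
groups = groups_of_total_length(["a", "ab", "b", "abc", "c"])
```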

by ajnatural at October 31, 2014 01:55 PM


Recursive methods with stacks

I'm doing some practice papers for revision for my finals and I came across this question:

"This question is about recursion. A recursive method can always be implemented by an iterative method that uses a stack to keep track of intermediate values. Could the stack be replaced by, for example, a queue? Explain (no explanation means no marks)."

And their answer is: "No, a stack is needed to reuse the results of the recursive calls, which means we need a LIFO (i.e., reversed) order. A stack is perfect since each call can be interpreted as a push and each return as a pop of the result."

Now this sort of confuses me. Couldn't you also say that a queue is okay, because each call can be interpreted as an append and each return as a serve?
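To make the distinction concrete without settling the exam question, here is a small sketch (in Python, my own example) converting a number to binary: the recursion combines the intermediate values in LIFO order, which a stack reproduces but a FIFO queue does not:

```python
from collections import deque

def to_binary_recursive(n):
    # the deepest call's result is used first when combining
    return "" if n == 0 else to_binary_recursive(n // 2) + str(n % 2)

def to_binary_stack(n):
    # iterative version: each "call" pushes, each "return" pops (LIFO)
    stack = []
    while n > 0:
        stack.append(n % 2)
        n //= 2
    return "".join(str(stack.pop()) for _ in range(len(stack)))

def to_binary_queue(n):
    # same pushes, but serving from a FIFO queue reverses the combination order
    q = deque()
    while n > 0:
        q.append(n % 2)
        n //= 2
    return "".join(str(q.popleft()) for _ in range(len(q)))
```

For n = 6 the recursive and stack versions both give "110", while the queue version gives "011".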

by Shotokan at October 31, 2014 01:54 PM


Little o Notation

I understand the little-oh notation a bit, but there is still some confusion. By definition, I get that f ∈ o(g) means that |f(x)/g(x)| approaches 0 as x approaches infinity. I also read somewhere that if f ∈ o(g), then

For every choice of a constant k > 0, you can find a constant a such that the inequality f(x) < k g(x) holds for all x > a.

My question is this: if f ∈ o(g), does this mean that for every value of a, one can find a constant k such that f(x) < k*g(x) for all x > a ?
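A quick numeric sanity check (Python, with the hypothetical choice f(x) = x and g(x) = x², which is in o(g)) illustrates the definition's quantifier order: k is chosen first, then a cutoff a is found for that k:

```python
f = lambda x: x
g = lambda x: x * x

# |f(x)/g(x)| shrinks toward 0 as x grows
ratios = [f(x) / g(x) for x in (10, 100, 1000)]

# for every k > 0 there is a cutoff a (here a = 1/k works) with f(x) < k*g(x) for x > a
for k in (0.5, 0.1, 0.01):
    a = 1 / k
    assert all(f(x) < k * g(x) for x in range(int(a) + 1, int(a) + 100))
```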

by Ojas at October 31, 2014 01:52 PM


Is there a good way to render animation with Clojure for video production?

I'm working on an animation for a bit of motion graphics work. It's mathematical enough that I'd rather build it in a programming language than in a traditional motion graphics app.

Is there a good workflow for building an animation in Clojure/Java and rendering it to video suitable for a production pipeline?

My first instinct was to use ClojureScript and SVG, but I have no idea how to render that to anything useful.

submitted by peeja

October 31, 2014 01:51 PM


How is a transducer different from a partially applied function?

After reading this article on Clojure introducing transducers, I'm confused about what a transducer is. Is a partially applied map in Haskell, such as map (+1), a transducer? At first I thought this was a Clojure way of using partial application, but then the article goes on to implement them in Haskell with an explicit type. What use do they have in Haskell?
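For comparison, a transducer can be sketched in a few lines of Python (my own names): unlike a partially applied map, which is tied to producing one kind of sequence, a transducer transforms a *reducing function* and is therefore independent of the output collection:

```python
from functools import reduce

def mapping(f):
    """A transducer: takes a reducing function rf and returns a new one."""
    def xform(rf):
        return lambda acc, x: rf(acc, f(x))
    return xform

def transduce(xform, rf, init, coll):
    return reduce(xform(rf), coll, init)

inc = mapping(lambda x: x + 1)

# the same transducer works with different reducing functions / result types
as_list = transduce(inc, lambda acc, x: acc + [x], [], [1, 2, 3])
as_sum  = transduce(inc, lambda acc, x: acc + x, 0, [1, 2, 3])
```

Here `as_list` is [2, 3, 4] and `as_sum` is 9: the transformation logic is written once, yet it is not committed to any particular collection the way a partially applied map is.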

by Ramith Jayatilleka at October 31, 2014 01:46 PM


Does an abstract machine only have one language? [on hold]

An abstract machine has a language consisting of all the strings it recognizes. In that sense, is it correct that an abstract machine has only one language?

On a computer, there is

  • a (unique?) machine language,
  • a (non-unique?) assembly language, and
  • multiple high-level programming languages (e.g. C, Java, Python).

Do these languages belong to different abstract machines, or do they all belong to the same abstract machine? E.g. do the machine language, the assembly language, C, Java, and Python all belong to the same abstract machine? Or, because an abstract machine can only have one language, do the machine language, the assembly language, C, Java, and Python belong to different abstract machines?


by Tim at October 31, 2014 01:45 PM


Best way to group adjacent array items by value

Assume we have an array of values:

[5, 5, 3, 5, 3, 3]

What is the best way to group them by value and adjacency? The result should be as follows:

[ [5,5], [3], [5], [3,3] ]

Of course, I can loop through the source array and look at the next/previous item, and if they are the same, push them to a temporary array that is then pushed to the resulting array.

But I like to write code in a functional way. So maybe there is a better way?
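As a language-neutral sketch of the functional approach, this is exactly what `itertools.groupby` does in Python (it starts a new group every time the value changes):

```python
from itertools import groupby

def group_adjacent(xs):
    # groupby yields (key, run) pairs, one run per maximal block of equal values
    return [list(run) for _, run in groupby(xs)]

result = group_adjacent([5, 5, 3, 5, 3, 3])  # [[5, 5], [3], [5], [3, 3]]
```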

by Girafa at October 31, 2014 01:42 PM



6-coloring of a tree in a distributed manner

I have some difficulties understanding the distributed algorithm for 6-coloring a tree in $O(\log^*n)$ time.

The full description can be found in the following paper: Parallel Symmetry-Breaking in Sparse Graphs by Goldberg, Plotkin, and Shannon.

In short, the idea is ...

Starting from the valid coloring given by the processor ID's, the procedure iteratively reduces the number of bits in the color descriptions by recoloring each nonroot node $v$ with the color obtained by concatenating the index of a bit in which $C_v$ differs from $C_{parent}(v)$ and the value of this bit. The root $r$ concatenates $0$ and $C_r[0]$ to form its new color.

The algorithm terminates after $O(\log^*n)$ iterations.

I don't have an intuitive understanding of why it actually terminates in $O(\log^*n)$ iterations. As mentioned in the paper, on the final iteration the smallest index at which two bit strings differ is at most 3. So the 0th and 1st bits could be the same, and $2^2=4$, so these two bits give us 4 colors, plus another 2 colors for a differing 3rd bit, for a total of 8 colors, not 6 as in the paper. And why can't we proceed further with 2 bits? It's still possible to find differing bits and separate them.

I would appreciate a little bit deeper analysis of the algorithm than in the paper.
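One recoloring step from the quoted description can be sketched in Python (my own function; colors are plain integers, and I pick the lowest differing bit, though any differing bit works):

```python
def recolor(child, parent):
    """One iteration for a non-root node: the new color concatenates the index
    of a bit where child and parent differ with the child's value of that bit."""
    diff = child ^ parent
    i = (diff & -diff).bit_length() - 1   # index of the lowest differing bit
    b = (child >> i) & 1                  # child's value of that bit
    return (i << 1) | b                   # "concatenate" index and bit value

# Validity is preserved: if two adjacent nodes got the same new color, they
# would agree on both the chosen index and the bit value at that index,
# contradicting the fact that they differ there.
```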

by fog at October 31, 2014 01:37 PM


Replicating portfolio and risk-neutral pricing for interest rate options

For equity options, the pricing of options depends on the existence of a replicating portfolio, so you can price the option as the constituents of that replicating portfolio. However, I am not seeing how the same analysis can be applied to value interest rate options. Does the concept of replication apply to interest rate derivatives? If so, what would a replicating portfolio look like?

by ezbentley at October 31, 2014 01:37 PM




What is the difference between using the return statement and defaulting to return the last value?

I am learning Scala and I noticed something about using the return statement.

So, obviously in Scala, if you don't have a return statement, the last value is returned by default. Which is great. But if you use the return statement without specifying the return type, Scala says "error: method testMethod has return statement; needs result type"

So this works

  def testMethod(arg: Int) = {

but this gives the error

  def testMethod(arg: Int) = {
    return arg*2

This makes me scratch my chin and go

Mmmmmmmm... There must be a reason for that.

Why is the explicit type declaration needed when you use a return statement but not when you let Scala return the last value? I assumed they were exactly the same thing, and that return statements are just for if you want to return a value inside a nested function/conditional, etc. (In other words, that a "return" statement is automatically inserted to your last value by the compiler.. if not present anywhere else in the method)

But clearly I was wrong. Surely there must be some other difference in the implementation?

Am I missing something?

by Marco Prins at October 31, 2014 01:23 PM

How to read a text file with mixed encodings in Scala or Java?

I am trying to parse a CSV file, ideally using weka.core.converters.CSVLoader. However, the file I have is not a valid UTF-8 file. It is mostly UTF-8, but some of the field values are in different encodings, so there is no encoding in which the whole file is valid, yet I need to parse it anyway. Apart from using Java libraries like Weka, I am mainly working in Scala. I am not even able to read the file. For example:



    java.nio.charset.MalformedInputException: Input length = 1
at java.nio.charset.CoderResult.throwException(
at sun.nio.cs.StreamDecoder.implRead(
at scala.collection.Iterator$$anon$
at scala.collection.Iterator$$anon$25.hasNext(Iterator.scala:562)
at scala.collection.Iterator$$anon$19.hasNext(Iterator.scala:400)
at scala.collection.Iterator$class.foreach(Iterator.scala:772)

I am perfectly happy to throw all the invalid characters away or replace them with some dummy. I am going to have lots of text like this to process in various ways and may need to pass the data to various third-party libraries. An ideal solution would be some kind of global setting that causes all the low-level Java libraries to ignore invalid bytes in text, so that I can call third-party libraries on this data without modification.


import scala.io.{Codec, Source}
import java.nio.charset.CodingErrorAction

implicit val codec = Codec("UTF-8")
codec.onMalformedInput(CodingErrorAction.REPLACE)
codec.onUnmappableCharacter(CodingErrorAction.REPLACE)

val src = Source.fromFile(fileName) // invalid bytes are now replaced instead of throwing

Thanks to +Esailija for pointing me in the right direction. This led me to "How to detect illegal UTF-8 byte sequences to replace them in java inputstream?", which provides the core Java solution. In Scala I can make this the default behaviour by making the codec implicit. I think I can make it the default behaviour for the entire package by putting the implicit codec definition in the package object.

by Daniel Mahler at October 31, 2014 01:21 PM


Why does most cryptography depend on large prime number pairs, as opposed to other problems?

Most current cryptography methods depend on the difficulty of factoring numbers that are the product of two large prime numbers. As I understand it, that is difficult only as long as the method used to generate the large primes cannot be used as a shortcut to factoring the resulting composite number (and that factoring large numbers itself is difficult).

It looks like mathematicians find better shortcuts from time to time, and encryption systems have to be upgraded periodically as a result. (There's also the possibility that quantum computing will eventually make factorization a much easier problem, but that's not going to catch anyone by surprise if the technology catches up with the theory.)

Some other problems are proven to be difficult. Two examples that come to mind are variations on the knapsack problem, and the traveling salesman problem.

I know that Merkle–Hellman has been broken, that Nasako–Murakami remains secure, and that knapsack problems may be resistant to quantum computing. (Thanks, Wikipedia.) I found nothing about using the traveling salesman problem for cryptography.

So, why do pairs of large primes seem to rule cryptography?

  • Is it simply because it is currently easy to generate pairs of large primes that are easy to multiply but difficult to factor?
  • Is it because factoring pairs of large primes is proven to be difficult to a predictable degree that is good enough?
  • Are pairs of large primes useful in a way other than difficulty, such as the property of working for both encryption and cryptographic signing?
  • Is generating problem instances of the other problem types that are hard enough for cryptographic purposes itself too difficult to be practical?
  • Are the properties of other problem types insufficiently studied to be trusted?
  • Other.
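As a concrete toy illustration of the first bullet, RSA-style key generation in Python with tiny primes (real keys use primes hundreds of digits long, which keeps computing n trivial while recovering p and q from n is infeasible):

```python
# Toy RSA with tiny primes -- illustration only, not secure
p, q = 61, 53
n = p * q                    # multiplying is easy: n = 3233
phi = (p - 1) * (q - 1)      # requires p and q, i.e. the factorization of n
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

m = 42                       # a message
c = pow(m, e, n)             # encrypt with the public key (e, n)
recovered = pow(c, d, n)     # decrypt with the private key (d, n)
```

The same key pair also works in the other direction (sign with d, verify with e), which touches on the third bullet.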

by Steve at October 31, 2014 01:20 PM



SBT not resolving sbt-heroku plugin dependency

I'm moving my app from RUN@Cloud to Heroku, and I'm trying to deploy it using the sbt-heroku plugin, but the dependency won't resolve.

[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: com.heroku#sbt-heroku;0.1.5: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::

sbt version is 0.13.1. The content of Build.scala file:

object ApplicationBuild extends Build {

  val appName = "happymelly-teller"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "be.objectify" %% "deadbolt-scala" % "2.2-RC2",
    "com.andersen-gott" %% "scravatar" % "1.0.3",
    "com.github.tototoshi" %% "slick-joda-mapper" % "0.4.0",
    "" %% "play-slick" % "",
    "com.heroku" % "sbt-heroku" % "0.1.5",
    //"com.typesafe.slick" %% "slick" % "1.0.1",
    "mysql" % "mysql-connector-java" % "5.1.27",
    "org.apache.poi" % "poi" % "3.9",
    "org.apache.poi" % "poi-ooxml" % "3.9",
    "org.joda" % "joda-money" % "0.9",
    "org.pegdown" % "pegdown" % "1.4.2",
    "org.planet42" %% "laika-core" % "0.5.0",
    "org.jsoup" % "jsoup" % "1.7.3",
    // update selenium to avoid browser test to hang
    "org.seleniumhq.selenium" % "selenium-java" % "2.39.0",
    "ws.securesocial" %% "securesocial" % "2.1.3",
    "nl.rhinofly" %% "play-s3" % "3.3.3"

  val main = play.Project(appName, appVersion, appDependencies).settings(scalariformSettings: _*).settings(
    resolvers += Resolver.url("heroku-sbt-plugin-releases", url(""))(Resolver.ivyStylePatterns),
    resolvers += Resolver.url("sbt-plugin-releases", url(""))(Resolver.ivyStylePatterns),
    resolvers += Resolver.url("Objectify Play Snapshot Repository", url(""))(Resolver.ivyStylePatterns),
    resolvers += Resolver.url("Objectify Play Repository", url(""))(Resolver.ivyStylePatterns),
    resolvers += "Rhinofly Internal Repository" at "",
    resolvers += Resolver.sonatypeRepo("releases"),
    routesImport += "binders._",
    /* Scalariform: override default settings - no spaces within pattern binders is the only option in IntelliJ IDEA,
       preserve spaces before arguments is needed for infix function syntax (unconfirmed). */
    ScalariformKeys.preferences := FormattingPreferences().
      setPreference(SpacesWithinPatternBinders, false).
      setPreference(RewriteArrowSymbols, true).
      setPreference(PreserveSpaceBeforeArguments, true)
    // Avoid building Scaladocs and sources to reduce build time.
  ).settings(sources in(Compile, doc) := Seq.empty
  ).settings(publishArtifact in(Compile, packageDoc) := false
  ).settings(publishArtifact in(Compile, packageSrc) := false)
}


I tried several versions of the plugin (not only 0.1.5, but also 0.1.4 and 0.1.3) without any success. Has anyone had this issue?

by sery0ga at October 31, 2014 01:16 PM

Token based authentication in Play filter & passing objects along

I've written an API based on Play with Scala and I'm quite happy with the results. I'm at the stage of optimising and refactoring the code for the next version of the API, and I have a few questions, the most pressing of which concerns authentication and the way I manage it.

The product I've written deals with businesses, so exposing username + password with each request, or maintaining sessions on the server side, weren't the best options. Here's how authentication works for my application:

  • The user authenticates with username/password.
  • The server returns a token associated with the user (stored as a column in the user table).
  • Each request made to the server from that point must contain the token.
  • The token is changed when a user logs out, and also periodically.

Now, my implementation of this is quite straightforward: I have several forms for all the API endpoints, each of which expects a token against which it looks up the user and then evaluates whether the user is allowed to make the change in the request, or get the data. So each of the forms in the authenticated realm needs a token, and then several other parameters depending on the API endpoint.

What this causes is repetition: each of the forms I'm using has to have a verification part based on the token, and it's obviously not the briefest way to go about it. I keep replicating the same kind of code over and over again.

I've been reading up on Play filters and have a few questions:

  • Is token-based authentication using Play filters a good idea?
  • Can a filter be skipped for a certain request path?
  • If I look up a user based on the supplied token in a filter, can the looked-up user object be passed on to the action, so that we don't end up repeating the lookup for that request? (See the example below of how I'm approaching this situation.)

  case class ItemDelete(usrToken: String, id: Long) {
    var usr: User = null
    var item: Item = null
  }

  val itemDeleteForm = Form(
    mapping(
      "token" -> nonEmptyText,
      "id" -> longNumber
    )(ItemDelete.apply)(ItemDelete.unapply)
    .verifying("invalid token", // message reconstructed; original snippet was truncated
      del => {
        del.usr = User.authenticateByToken(del.usrToken)
        del.usr match {
          case null => false
          case _ => true
        }
      })
    .verifying("no such item",
      del => {
        if (del.usr == null) false
        else Item.find // hypothetical query start; the original snippet begins mid-chain
          .eq("companyId", del.usr.companyId) // reusing the 'usr' object, avoiding multiple db lookups
          .findList.toList match {
            case Nil => false
            case List(item, _*) =>
              del.item = item
              true
          }
      })
  )

by Ashesh at October 31, 2014 01:16 PM


A quick announcement on the car toll: a use for other ...

A quick announcement on the car toll:
"A use for other purposes, such as traffic surveillance or searches for wanted persons, is currently not provided for by the planned law, and is prohibited."
Oh, I see! Currently. Not provided for by the planned law! Well, THEN everything is fine!1!!

October 31, 2014 01:01 PM

First Look seems to be having problems. They had hired Matt ...

First Look seems to be having problems. Seven months ago they hired Matt Taibbi so that he could build a magazine for them. Matt drew positive attention with his reporting on the financial crisis at Rolling Stone, and I have linked to him frequently. Now he has apparently quit.

October 31, 2014 01:01 PM


Scala: reading lines from file with curried function in 'foreach'?

Reading lines from a file, I am trying to use a curried function in 'foreach':

import scala.io.Source

object CurriedTest {

  def main(args: Array[String]): Unit = {
    val lst = List("x", "y", "z")
    lst.foreach(fun("000")("111"))
    Source.fromFile(args(0)).getLines.foreach(fun("AAA")("BBB") _)
  }

  def fun(a1: String)(a2: String)(a3: String) = {
    println("a1: " + a1 + " a2: " + a2 + " a3: " + a3)
  }

  def fun2 = fun("one")("two") _
}

For some reason the following statement:

    Source.fromFile(args(0)).getLines.foreach(fun("AAA")("BBB") _)

does not produce any output. Why?

by DarqMoth at October 31, 2014 12:56 PM

Dave Winer

Apple software gets worse

I've been having trouble with wifi on my iPad ever since I did the 8.0 upgrade, then the 8.0.1 upgrade. Before that, it worked fine, except I couldn't log on to my Facebook account through the Facebook app, I had to go in through the web. It was never able to "connect to the server," through the system settings. I tried everything, including completely flattening the system, and starting over. I accepted the lower functionality. None of my other iOS devices had this problem.

So I decided to upgrade to 8.1 today to see if we could get rid of the wifi problem. After failing to verify the update twice, I rebooted the iPad, something that cures a lot of its problems, but now it asked me to connect the iPad up to iTunes. So I did. It said the iPad was in "recovery mode" and I had to do a fresh install of the OS. I tried two more times, got the same result. So I did a fresh install. Then restored from a backup (no indication what the dates are on the backups, wouldn't it be nice if it just offered the latest, and options to get even older ones).

Now I have an iPad with none of my data on it. No, I don't trust iCloud (do you blame me?).

The commercials are still funny, but in a way not, because today's Apple software is so much like the PC they used to ridicule. There really is no silver bullet in software, you just have to test, and be conservative in your changes, if you want to keep from breaking your users.

PS: The Facebook app still doesn't work.

October 31, 2014 12:46 PM



Examples of Long-Standing Conjectures later trivially proved by an implication

I'd like to know if there have been conjectures in TCS that were long unproven and later proved by an implication from another theorem that may have been easier to prove.

by Ryan at October 31, 2014 12:40 PM


DFA for every run of a's=2 or 3

I am trying to create a DFA for L = {w : every run of a's has length either two or three}.

This is my attempt at the solution... I feel like I am missing something.

[image: attempted DFA diagram]
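In lieu of the image, the intended automaton can be checked in code. A sketch in Python (my own state naming) that tracks the length of the current run of a's, with a dead state for invalid runs:

```python
def accepts(w):
    """DFA for {w in {a,b}* : every maximal run of a's has length 2 or 3}."""
    DEAD = -1
    state = 0                      # 0 = between runs; 1..3 = current a-run length
    for ch in w:
        if state == DEAD:
            break
        if ch == 'a':
            state = state + 1 if state < 3 else DEAD   # four a's in a row is fatal
        else:  # 'b' closes the current run, which must have length 0, 2 or 3
            state = 0 if state in (0, 2, 3) else DEAD
    return state in (0, 2, 3)      # accept: no pending run, or a valid final run
```

So the DFA needs five states: "no pending a's", one state per run length 1–3, and a dead state.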

by matt mowris at October 31, 2014 12:37 PM


HAR-RV, realized GARCH and HEAVY model for realized volatility

I don't have much experience with volatility modeling using intraday data but I'm in the process of collecting 5mins data. Currently I have ~6 months of data. Is it enough to use these models with such short history? Which one should perform better out of sample having this small amount of data?

by opt at October 31, 2014 12:37 PM



Distributed push relabel with changing graph topology

There is at least one distributed version proposed for the push-relabel maximum-flow algorithm. I wonder if and how this algorithm can cope with nodes leaving or entering the graph at runtime. Is there any work on this?

by jederik at October 31, 2014 12:23 PM


Scala "def" method declaration: Colon vs equals

I am at the early stages of learning Scala and I've noticed different ways to declare methods.

I've established that not using an equals sign makes the method a void method (returning Unit instead of a value), while using an equals sign returns the actual value. So

def product(x: Int, y: Int) { x * y }

will return () (Unit), but

def product(x: Int, y: Int) = { x * y }

will return the product of the two arguments (x*y).

I've noticed a third way of declaring methods - with a colon. Here is an example

def isEqual(x: Any): Boolean

How does this differ from the = notation? And in what situations is it best to use this way instead?

by Marco Prins at October 31, 2014 12:14 PM

Scala vs F# on List range from 1 to 100000000

I have written a list-manipulation function in both F# and Scala to compare their performance. To test the function I need to initialize a list with the numbers 1 to 100000000.


F#:

let l = [1..100000000];;

Real: 00:00:32.954, CPU: 00:00:34.593, GC gen0: 1030, gen1: 520, gen2: 9

This works.

Scala (launched with the -J-Xmx2G option):

val l = (1 to 10000000).toList // works

val l = (1 to 100000000).toList // no response long while and finally got java.lang.OutOfMemoryError: Java heap space

With 100000000 (100,000,000), there is no response for a long while (an hour) with 75%–90% CPU utilization and 2 GB memory utilization, and finally java.lang.OutOfMemoryError: Java heap space.

Am I doing anything wrong in Scala?

by M Sheik Uduman Ali at October 31, 2014 12:14 PM


Equivalence of definitions of balanced parentheses strings

The strings of balanced parentheses can be defined in at least two ways.

  1. A string w over alphabet {(,)} is balanced IFF:
    a) w has an equal number of ('s and )'s and
    b) any prefix of w has at least as many ('s as )'s.
  2. a) $\epsilon$ is balanced.
    b) If w is a balanced string, then (w) is balanced.
    c) If w and x are balanced strings, then so is wx.
    d) Nothing else is a balanced string.

Proof by induction on the length of strings that definitions (1) and (2) define the same class of strings.
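Definition (1) translates directly into executable form, which is handy for sanity-checking the equivalence on small strings before proving it (a Python sketch, my own function names; `balanced_def2` checks the inductive rules by brute-force decomposition):

```python
from itertools import product

def balanced_def1(w):
    """Definition (1): equal counts, and every prefix has >= as many '(' as ')'."""
    depth = 0
    for ch in w:
        depth += 1 if ch == '(' else -1
        if depth < 0:           # some prefix has more ')' than '('
            return False
    return depth == 0           # equal number of '(' and ')'

def balanced_def2(w):
    """Definition (2): empty, or (w') with w' balanced, or a concatenation."""
    if w == "":
        return True                                    # rule (a)
    for k in range(2, len(w), 2):                      # rule (c): w = p + s, both nonempty
        if balanced_def2(w[:k]) and balanced_def2(w[k:]):
            return True
    return w[0] == '(' and w[-1] == ')' and balanced_def2(w[1:-1])  # rule (b)

# the two definitions agree on every string of length up to 8
for n in range(9):
    for w in map("".join, product("()", repeat=n)):
        assert balanced_def1(w) == balanced_def2(w)
```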

My proof:
Basis: |x|=0, then x=$\epsilon$, which satisfies both definitions.

Continue by induction on the length of x satisfying (1).

For |x|>0, if x is balanced by definition (1), then we select the SMALLEST prefix p that has an equal number of ('s and )'s. Thus p satisfies 1.a and 1.b (because any prefix of p is a prefix of x).
The residual suffix s also satisfies (1), because any prefix of s has the same properties on its own as it does together with the balanced p to its left (thus becoming a prefix of x). Consider two cases:

  • a. |p|<|x| and |s|<|x|: then by the inductive hypothesis p and s are both balanced by definition (2), and x=ps is balanced by (2).
  • b. p=x: there is no prefix, except the whole string x, that has an equal number of ('s and )'s. Then by 1.b every prefix, except the whole string, has strictly more ('s than )'s. Thus the string w created by removing the first ( and the last ) satisfies (1). Since |w|<|x|, by the inductive hypothesis w satisfies (2), and thus x=(w) also satisfies (2).

Continue by induction on the length of x satisfying (2).
If |x|>0 and x is balanced by definition (2), then from 2.d two cases may take place:

  1. x=(w), where w is balanced by (2); thus by the inductive hypothesis w is balanced by (1). Adding ( at the beginning of w does not violate property 1.b, and finally adding ) at the end fulfils condition 1.a. Thus x is balanced by (1).
  2. x=ps, where p and s are balanced by (2); thus by the inductive hypothesis p and s are balanced by (1). A prefix of x is either a prefix of p (which already satisfies 1.b) or the balanced p concatenated with a prefix of s (which also fulfils 1.b). Property 1.a holds for x too. Thus x is balanced by (1).

Please check the correctness of my proof or advise a better one.

by user102417 at October 31, 2014 12:12 PM


Pricing a defaultable binary option by the hazard rate approach

I'm studying defaultable claims and I asked myself the question of pricing a digital payoff.

Consider an option paying $1$ at maturity in case of no default before maturity and if a given underlying process $S$ touches a barrier $B$; otherwise it pays $0$.

My idea was to approach it via the hazard rate formulation. The no-arbitrage price is given by (where the expected value is evaluated under the risk-neutral measure)

$$D(t,T)= \mathbb E_t\left[\exp(-\int_t^Tr_u du) \mathbf 1_{\{ \tau>T\}}\mathbf 1_{\{ \tau_B \leq T\}}\right] $$

We assume the touch time $\tau_B$ and the default time $\tau$ are independent and $(r_u)_{u\geq0}$ is deterministic, so we have

$$D(t,T)= \exp(-\int_t^T(r_u +h_u )~du)\mathbb P_t\left[ \tau_B \leq T\right] $$ where $(h_u)_{u\geq0}$ is the hazard rate associated to default.

Now consider another option paying $1$ at dates $0<T_1< T_2 < ... < T_N =T$ (all specified before the start of the contract), as long as no default has been observed (after the default date no further cash flows are paid) and as long as the underlying process $S$ has not touched the barrier $B$ (after that, no further cash flows are paid either).

Therefore the payoff is given by $$ \Pi(t,T) = \sum_{i=1}^N\exp(-\int_t^{T_i}r_u\, du)\, \mathbf 1_{\{ \tau>T_i\}}\mathbf 1_{\{ \tau_B>T_i\}}$$ and the price $$ P(t,T) = \sum_{i=1}^N\exp(-\int_t^{T_i}(r_u +h_u )~du)\,\mathbb P_t\left[ \tau_B > T_i\right] $$

Could someone critique my approach, please? Many thanks

by Paul at October 31, 2014 12:03 PM



Scala: curried function in foreach?

I am trying to use a curried function when iterating over a collection with the 'foreach' method:

object CurriedTest {

  def main(args: Array[String]): Unit = {
    val lst = List("x", "y", "z")
    lst.foreach(fun("one"),("two") _)
  }

  def fun(a1: String)(a2: String)(a3: String) = {
    println("a1: " + a1 + " a2: " + a2 + " a3: " + a3)
  }

  def fun2 = fun("one")("two") _
}

Why does the line 'lst.foreach(fun("one"),("two") _)' fail to compile with the error message below?

- too many arguments for method foreach: (f: String => B)Unit

by DarqMoth at October 31, 2014 11:48 AM

Clojure backtick expansion

According to the Learning Clojure wikibook, backticks are expanded as follows:

`(x1 x2 x3 ... xn)

is interpreted to mean

(clojure.core/seq (clojure.core/concat |x1| |x2| |x3| ... |xn|))

Why wrap concat with seq? What difference does it make?

by Matthew Molloy at October 31, 2014 11:45 AM


Efficient algorithms for vertical visibility problem

While thinking about a problem, I realised that I needed an efficient algorithm for the following task:

The problem: we are given a two-dimensional square box of side $n$ whose sides are parallel to the axes. We can look into it through the top. However, there are also $m$ horizontal segments. Each segment has an integer $y$-coordinate ($0 \le y \le n$) and $x$-coordinates ($0 \le x_1 < x_2 \le n$) and connects points $(x_1,y)$ and $(x_2,y)$ (look at the picture below).

We would like to know, for each unit segment on the top of the box, how deep can we look vertically inside the box if we look through this segment.

Formally, for $x \in \{0,\dots,n-1\}$, we would like to find $\max_{i:\ [x,x+1]\subseteq[x_{1,i},x_{2,i}]} y_i$.

Example: given $n=9$ and $m=7$ segments located as in the picture below, the result is $(5, 5, 5, 3, 8, 3, 7, 8, 7)$. Look at how deep light can go into the box.

Seven segments; the shaded part indicates the region which can be reached by light

Fortunately for us, both $n$ and $m$ are quite small and we can do the computations off-line.

The easiest algorithm for this problem is brute force: for each segment, traverse the whole array and update it where necessary. However, this gives us a not very impressive $O(mn)$.

A great improvement is to use a segment tree which is able to maximize values on the segment during the query and to read the final values. I won't describe it further, but we see that the time complexity is $O((m+n) \log n)$.

However, I came up with a faster algorithm:


  1. Sort the segments in decreasing order of $y$-coordinate (linear time using a variation of counting sort). Now note that if any $x$-unit segment has been covered by any segment before, no following segment can bound the light beam going through this $x$-unit segment anymore. Then we will do a line sweep from the top to the bottom of the box.

  2. Now let's introduce some definitions: $x$-unit segment is an imaginary horizontal segment on the sweep whose $x$-coordinates are integers and whose length is 1. Each segment during the sweeping process may be either unmarked (that is, a light beam going from the top of the box can reach this segment) or marked (opposite case). Consider a $x$-unit segment with $x_1=n$, $x_2=n+1$ always unmarked. Let's also introduce sets $S_0=\{0\}, S_1=\{1\}, \dots, S_n=\{n\}$. Each set will contain a whole sequence of consecutive marked $x$-unit segments (if there are any) with a following unmarked segment.

  3. We need a data structure that is able to operate on these segments and sets efficiently. We will use a find-union structure extended by a field holding the maximum $x$-unit segment index (index of the unmarked segment).

  4. Now we can handle the segments efficiently. Let's say we're now considering $i$-th segment in order (call it "query"), which begins in $x_1$ and ends in $x_2$. We need to find all the unmarked $x$-unit segments which are contained inside $i$-th segment (these are exactly the segments on which the light beam will end its way). We will do the following: firstly, we find the first unmarked segment inside the query (Find the representative of the set in which $x_1$ is contained and get the max index of this set, which is the unmarked segment by definition). Then this index $x$ is inside the query, add it to the result (the result for this segment is $y$) and mark this index (Union sets containing $x$ and $x+1$). Then repeat this procedure until we find all unmarked segments, that is, next Find query gives us index $x \ge x_2$.

Note that each find-union operation will be done in only two cases: either we begin considering a segment (which can happen $m$ times) or we've just marked a $x$-unit segment (this can happen $n$ times). Thus overall complexity is $O((n+m)\alpha(n))$ ($\alpha$ is an inverse Ackermann function). If something is not clear, I can elaborate more on this. Maybe I'll be able to add some pictures if I have some time.
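The four steps above can be sketched with a small union-find structure in Python. This is a hypothetical illustration of the described scheme, not the author's code: segments are `(y, x1, x2)` tuples with pairwise distinct $y$, and `visible_units` is my own naming.

```python
# Union-find whose root stores the maximum (i.e. first unmarked) x-unit index.
# x-unit segments are [k, k+1) for k in 0..n-1; index n is the unmarked sentinel.

def visible_units(n, segments):
    parent = list(range(n + 1))          # one singleton set per index, plus sentinel
    max_idx = list(range(n + 1))         # max index held by each set's root

    def find(a):                         # find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):                     # merge, keeping the larger max index
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            max_idx[rb] = max(max_idx[ra], max_idx[rb])

    result = {}                          # x-unit index -> y of the segment the beam hits
    for y, x1, x2 in sorted(segments, reverse=True):   # sweep from top to bottom
        x = max_idx[find(x1)]            # first unmarked unit at or after x1
        while x < x2:
            result[x] = y                # beam through unit x ends on this segment
            union(x, x + 1)              # mark unit x
            x = max_idx[find(x1)]
    return result
```

Each `union` corresponds to marking one unit (at most $n$ times) and each extra `find` to starting a segment (at most $m$ times), matching the $O((n+m)\alpha(n))$ bound stated above.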

Now I reached "the wall". I can't come up with a linear algorithm, though it seems there should be one. So, I have two questions:

  • Is there a linear-time algorithm (that is, $O(n+m)$) solving the horizontal segment visibility problem?
  • If not, what is the proof that the visibility problem is $\omega(n+m)$?

by mnbvmar at October 31, 2014 11:37 AM


AT1 ratio, Core T1 ratio and CET1 ratio

I would like first to know the precise definition of each of these 3 ratios, as well as their differences. On the web there is a bit of a mess in the explanations: I could not find a simple, clear, systematic presentation of the three without mixing the terms and other synonyms. That left me even more lost.

Besides that, it would be great if someone could explain to me, in a nutshell, the main issues and the context each regulatory ratio sits in, and thus its reason to exist.

Please feel free to recommend some reference reading on the subject, especially for beginners on regulation subjects.

Many thanks

by Paul at October 31, 2014 11:31 AM


Fred Wilson

Fun Friday: How Do You Message On Your Phone?

It’s time for a Fun Friday.

I want to know how people message on their phones.

Here’s how I do it:

Kik – I use Kik to message most of my family, my co-workers, and a few friends

iMessage – I have to say that iMessage is great. I use it to message with my daughter Emily and many people I work with. I think of iMessage as SMS+ and it’s pretty great.

Hangouts – My older daughter Jessica often will send me a Hangouts message. I think she does that when she’s at her computer and she isn’t sure if I’m on my phone or on my computer. Some of the people I work with will sometimes do that too.

I would say I use Kik about 60-70% of the time, iMessage 30% of the time, and Hangouts the rest.

How about you?

I’ve created a poll to make collecting this info easy, but I’m also interested in the color around this topic which should make good fodder for the comments.

(Poll: What Messengers Do You Use?)

by Fred Wilson at October 31, 2014 11:14 AM


minimize cost and maximize quality of function in four variables

Problem: Find optimal values for four parameters which are used to tune an algorithm under the constraints of accuracy, time and memory.

I have an indexing scheme used to search for the K-nearest neighbors of a point in many dimensions. This indexing scheme has four adjustable parameters. For each given fixed problem size (total number of points, number of nearest neighbors to find, number of dimensions, desired accuracy) I plan to run a solver to find the optimal values for these four parameters. Here are the constraints:

1) If the accuracy is below desired accuracy (say 95%), prefer parameter values that maximize the accuracy.

2) Once the accuracy equals or exceeds the desired accuracy, prefer parameters that minimize the execution time.

3) Do not exhaust all system memory.

I handle #3 by restricting the one parameter that is directly proportional to memory usage based on numerous manual tests.

One of the parameters is discrete: it may only take on integer values. (This is the one that affects memory size).

Three of the parameters always move in the same direction as accuracy: as they increase, accuracy always remains the same or increases, up to a maximum of 100%. Each of these three parameters also moves in the same direction as cost: as they increase, the cost always increases (without bound).

The fourth parameter has the smallest (and least predictable) influence on cost. The accuracy will always decrease as you move farther away from the optimal value, either up or down.

Because of the 100% accuracy cutoff, the function has places with no derivative (is not smooth).

I have hundreds of combinations to optimize, varying # of dimensions versus # of points. My current approach runs all night without completing all the cases and produces erratic results (poor solver algorithm that gravitates toward high accuracy and horrible time cost). Given the nature of the constraints I have described and the number of parameters, which class of algorithms should I investigate? I understand the gradient method, but worry that it will perform poorly (too many steps) or not work with the non-smoothness of the function.

It might be possible to seed each successively larger case with the best results from the next smaller case; the parameter values should vary smoothly as the case sizes increase. The cost of evaluating the execution time and accuracy increases with the number of points and with the number of dimensions.

Current algorithm:

1) Start with seed values for each parameter, as determined after days of manual testing.

2) Holding all parameters fixed but one and varying that one, find its best value. Test values in increasing order, stopping when the accuracy is acceptable.

3) Repeat for all parameters.

4) Make five passes over all parameters. At each pass, use a finer grid for the search.

My approach is basically using an inefficient gradient search in one dimension at a time. It is slow and produces good results for the smaller cases but soon yields poor results for the medium sized cases.
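The pass-based, one-coordinate-at-a-time search described above can be sketched as follows. This is a hypothetical Python illustration, not the author's implementation: `evaluate` stands in for running the real index benchmark and must return `(accuracy, time)`, and the lexicographic constraints (first reach the target accuracy, then minimize time) are folded into one penalized score.

```python
TARGET = 0.95  # desired accuracy (assumption for the sketch)

def score(accuracy, seconds):
    # Huge penalty while below the accuracy target, otherwise minimize time.
    return (TARGET - accuracy) * 1e6 if accuracy < TARGET else seconds

def coordinate_search(evaluate, params, grids, passes=5):
    best = dict(params)
    best_score = score(*evaluate(best))
    for p in range(passes):
        for name in sorted(best):                 # one coordinate at a time
            step = grids[name] / (2 ** p)         # finer grid on each pass
            for cand in (best[name] - step, best[name] + step):
                trial = dict(best, **{name: cand})
                s = score(*evaluate(trial))
                if s < best_score:
                    best, best_score = trial, s
    return best
```

Since the objective is non-smooth, derivative-free methods are the natural family to investigate next: Nelder-Mead or Powell's method handle exactly this coordinate-wise structure without gradients, and the discrete parameter can be rounded at evaluation time.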

The Four adjustable parameters

In thinking about them, I have decided that three of the parameters can be represented as discrete integer values, and only one is continuous.

a) Perms = Number of permutations of the index to create. Each index takes up memory. This parameter has the dominant influence on memory usage. This ranges from 3 to 20. Depending on the number of points indexed, somewhere between 20 and 32 indices will use up all memory.

b) Margin = Number of points to select in each index probe. If K is the number of neighbors sought, Margin typically varies from 3*K to 20*K.

c) Box-Points = Number of random faces of a hypercube surrounding the search point to probe. This ranges from 1 to 2*dimensions.

d) Box-Size = A fraction to multiply by the characteristic hypercube size to define the volume of hyperspace to search. Typically ranges from 0.3 to 1.5.

by Paul Chernoch at October 31, 2014 11:12 AM


What is the best way to defer a Message?

I have an actor "ItemProvider" which can receive a "getItems" message. An ItemProvider manages the Items of a project. So I can have several "getItems" messages requesting Items for project A and other "getItems" messages requesting Items for project B.

The first time "ItemProvider" gets such a message it needs to call a service to actually get the items (this can take a minute; the service returns a future, so it won't block the actor). During this wait period other "getItems" messages can arrive.

A project "ItemProvider" caches the "Items" it receives from the service. So after the 1 minute loading time it can serve the items instantly.

I am pretty sure the "ItemProvider" should use Akka's become feature. But how should it handle the clients it can not serve right away?

I can think of the following options:

  1. ItemProvider holds a List pendingMessages. And the messages it cannot serve are added to this list. When ItemProvider is "ready" it will handle the pending clients

  2. ItemProvider sends the message back to its parent. And the parent will reissue the message

  3. ItemProvider uses the scheduler. And gets the message in the future again.

  4. Maybe not use become but use the AbstractFSM class?

(I am using java)

Does anybody know the best Akka way to implement ItemProvider?
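Option 1 combined with become can be sketched language-neutrally in plain Python (a hypothetical model, not Akka code; the class and method names are mine). In Akka's Java API the same shape is usually expressed with `getContext().become(...)`, or equivalently with `stash()`/`unstashAll()` on `AbstractActorWithStash`.

```python
class ItemProvider:
    def __init__(self, load_service):
        self.load_service = load_service   # async service; calls back with the items
        self.items = None
        self.pending = []                  # requesters deferred during the load
        self.behaviour = self.initial      # current "become" state

    def receive(self, msg, reply):
        self.behaviour(msg, reply)

    def initial(self, msg, reply):
        if msg == "getItems":
            self.pending.append(reply)
            self.behaviour = self.loading          # become(loading)
            self.load_service(self.on_loaded)      # non-blocking service call

    def loading(self, msg, reply):
        if msg == "getItems":
            self.pending.append(reply)             # defer until the load completes

    def on_loaded(self, items):                    # "ItemsLoaded" message to self
        self.items = items
        for reply in self.pending:
            reply(self.items)                      # serve everyone who waited
        self.pending = []
        self.behaviour = self.ready                # become(ready)

    def ready(self, msg, reply):
        if msg == "getItems":
            reply(self.items)                      # served instantly from the cache
```

The pending list (option 1) keeps ordering and avoids the re-delivery churn of options 2 and 3; the state switch on completion is exactly what become (or AbstractFSM) gives you in Akka.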

by jack at October 31, 2014 11:11 AM


How is a Turing Test defined?

Turing Test definition taken from wikipedia:

The Turing test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In the original illustrative example, a human judge engages in natural language conversations with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer to questions; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine's ability to render words into audio

But how is a Turing Test defined?

What type of questions are considered good enough to trick the human judge?

What and who defines a human judge suitable to be a judge? (for example if the judge is 5 year old wouldn't he be more easily tricked than a 50 year old computer scientist?)

Are all those chatbots considered to pass the Turing Test? Most people are not sure whether the chat bot is an actual bot or not.

by John Demetriou at October 31, 2014 11:08 AM


Guava Function<> with void return value?

Does Google Guava for Java have a Function inner class with a void return value, like C#'s Action? I'm tired of making a bunch of Function<Float, Integer> with meaningless return values.

by Rosarch at October 31, 2014 10:35 AM


What is an efficient method to find implied volatility?

I have code that finds the implied volatility using the Newton-Raphson method.

I set the number of trials to 1000, but sometimes it fails to converge and doesn't find the result.

Is there a better method to find the result? Are there any technical conditions under which this numerical method is expected to fail to converge to the solution?

Here is the C# code:

    public double findIV(double S, double K, double r, double time, string type, double optionPrice)
    {
        int trials = 1000;
        double ACCURACY = 1.0e-5;
        double t_sqrt = Math.Sqrt(time);

        double sigma = (optionPrice / S) / (0.398 * t_sqrt);    // find initial value
        for (int i = 0; i < trials; i++)
        {
            Option myCurrentOpt = new Option(type, S, K, time, r, 0, sigma); // create an Option object
            double price = myCurrentOpt.BlackScholes();
            double diff = optionPrice - price;
            if (Math.Abs(diff) < ACCURACY)
                return sigma;
            double d1 = (Math.Log(S / K) + r * time) / (sigma * t_sqrt) + 0.5 * sigma * t_sqrt;
            double vega = S * t_sqrt * ND(d1);
            sigma = sigma + diff / vega;
        }
        throw new Exception("Failed to converge.");
    }

    public double ND(double X)
    {
        return (1.0 / Math.Sqrt(2.0 * Math.PI)) * Math.Exp(-0.5 * X * X);
    }
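Pure Newton-Raphson can indeed fail: when vega is tiny (deep in- or out-of-the-money options, very short expiries) the step `diff / vega` explodes, and a bad initial guess can shoot sigma negative. One standard, robust fix (sketched here in Python as an illustration, not a translation of the C# above) is to keep a bracket around the root and accept a Newton step only when it stays inside the bracket, falling back to bisection otherwise; this works because the Black-Scholes price is monotone in volatility.

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    sigma = 0.5 * (lo + hi)
    for _ in range(100):
        diff = bs_call(S, K, r, T, sigma) - price
        if abs(diff) < tol:
            return sigma
        # maintain the bracket [lo, hi] (price is increasing in sigma)
        if diff > 0:
            hi = sigma
        else:
            lo = sigma
        d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
        vega = S * sqrt(T) * exp(-0.5 * d1 * d1) / sqrt(2.0 * pi)
        newton = sigma - diff / vega if vega > 1e-12 else None
        # accept the Newton step only if it stays inside the bracket
        sigma = newton if newton is not None and lo < newton < hi else 0.5 * (lo + hi)
    return sigma
```

This keeps Newton's quadratic convergence near the root while guaranteeing bisection-style convergence everywhere else.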

by opt at October 31, 2014 10:27 AM


Questions on Scala performance tunning

I am new to Scala, just wrote this program:

def isPrime(num: Int, primes: scala.collection.mutable.MutableList[Int]): Boolean =
  primes.forall(num % _ != 0)  // trial division by every known prime

def primeListUnder(upper: Int) = {
  val primes = scala.collection.mutable.MutableList(2, 3)
  var num = 4
  while (num < upper) {
    if (isPrime(num, primes)) primes += num
    num += 1
  }
  primes
}
This is trying to get all the prime numbers under some specified upper bound. But it runs really slow. Any suggestions?
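The main cost is that each candidate is trial-divided by every known prime, rather than only by primes up to its square root. A sieve of Eratosthenes avoids division entirely; here is a sketch in Python for comparison (the same idea translates directly to Scala with an `Array[Boolean]`):

```python
def primes_under(upper):
    # Sieve of Eratosthenes: O(n log log n) operations.
    if upper <= 2:
        return []
    is_composite = [False] * upper
    for p in range(2, int(upper ** 0.5) + 1):
        if not is_composite[p]:
            for multiple in range(p * p, upper, p):  # start at p*p: smaller multiples already crossed off
                is_composite[multiple] = True
    return [i for i in range(2, upper) if not is_composite[i]]
```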

by Stanley Shi at October 31, 2014 10:22 AM


References on callable bond's pricing

I am searching for references on pricing callable bonds.

I've not found any rigorous mathematical approach on the web. All I found were some soft approaches in a discrete framework.

Could someone help with that please?

by Paul at October 31, 2014 10:19 AM

What is the formula for beta weighted delta and gamma?

I am trying to calculate the beta weighted delta and gamma for a portfolio of options of different underlying stocks, but I can't seem to find the correct formula.

Can someone point me to it or a book that contains it?
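One commonly used convention (hedged: platforms differ on this, so treat it as one reasonable definition rather than the formula) comes from the chain rule. If a stock moves with the index as $dS_i \approx \beta_i (S_i / I)\, dI$, where $I$ is the reference index level, then each position's delta picks up one factor of $\beta_i S_i / I$ and gamma, being a second derivative, picks up that factor squared:

```python
# Hypothetical sketch; `positions` is a list of dicts with my own field names.
def beta_weighted_greeks(positions, index_level):
    bw_delta = 0.0
    bw_gamma = 0.0
    for p in positions:
        dS_dI = p["beta"] * p["price"] / index_level   # chain-rule factor dS/dI
        bw_delta += p["qty"] * p["delta"] * dS_dI      # P&L sensitivity per index point
        bw_gamma += p["qty"] * p["gamma"] * dS_dI ** 2 # second-order, so factor squared
    return bw_delta, bw_gamma
```

The result is the portfolio's delta and gamma expressed per point of the reference index, which makes positions on different underlyings directly comparable.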

by CptanPanic at October 31, 2014 10:17 AM


While writing out the cores (to make parsing table) for SLR(1) and CLR(1) grammars why do we write lookaheads for only CLR?

I mean, both SLR(1) and CLR(1) can look 1 symbol ahead. Why do we record that lookahead in the items only for CLR, and not for SLR, while drawing the cores to make the table?

by Prateek at October 31, 2014 10:15 AM

Big-O Notation for Menezes-Vanstone Elliptic Curve Cryptography?

I need some help: how can I compute the time complexity of this algorithm (Menezes-Vanstone Elliptic Curve Cryptography)? I have spent much time reading journals and papers but as yet have been unable to find any record of its performance complexity. It is known that the algorithm's encryption function is:

$C_1 = k_1 \cdot m_1 \bmod p$

$C_2 = k_2 \cdot m_2 \bmod p$

The decryption function is:

$m_1 = C_1 \cdot k_1^{-1} \bmod p$.

$m_2 = C_2 \cdot k_2^{-1} \bmod p$.

I think the encryption function takes $O((\log n)^2)$ bit operations,

where $T(C_1) = O((\log n)^2)$ bit operations,

$T(C_2) = O((\log n)^2)$ bit operations,

and the decryption function takes $O((\log n)^3)$, where

$T(m_1) = O((\log n)^2) + T(k_1^{-1})$,

$T(k_1^{-1}) = O((\log n)^3)$, by the extended Euclidean method,

$T(m_1) = O((\log n)^2) + O((\log n)^3)$ bit operations,

$T(m_2) = O((\log n)^2) + O((\log n)^3)$ bit operations.

Is that true?

by mahmoud_zaiin at October 31, 2014 10:03 AM

Planet Clojure

Clojure Weekly, Oct 31st, 2014

Welcome to another issue of Clojure Weekly, Halloween Edition! Here I collect a few links, normally 4/5 urls, pointing at articles, docs, screencasts, podcasts and anything else that attracts my attention in the clojure-sphere. I add a small comment so you can decide if you want to look at the whole thing or not. That’s it, enjoy! Here are the glorious Clojure charts and text responses. About 1400 people replied to the questions for Clojure, 650 for ClojureScript. Notable results: adoption in production environments is increasing, usage is mainly for webapps, usage for ETL and big data is decreasing. As for new features, a quicker startup mode for clj dev is at the top (along with the usual better stack traces). There are many more insights if you care to dig deeper.

bean - Clojure stdlib The bean function is quite a handy one. Just feed it a Java object to transform its public fields into a read-only Clojure map. Use it when you have interop needs for value objects that are accessed frequently in read-only mode. If you need to write, consider wrapping everything in a defmutable (read below).

defmutable · reborg/reclojure This macro is a thin wrapper around deftype. Deftype by default creates an immutable non-associative Clojure structure, which is good for 90% of the cases. In Java interop contexts, though, you may need something mutable, and Clojure allows you to define mutable deftype attributes as well with :volatile-mutable. So far so good, but then you also need to provide "setters" for each of the mutable fields. This macro automates everything for you: (defmutable MyType [attr1 attr2]) and then read with (.attr1 (MyType. :a :b)) and write with (.attr1! (MyType. :a :b) :c).

sunng87/slacker Slacker exposes a Clojure namespace as an RPC service. It can do that in different ways; by default it uses a binary protocol (with some options such as the serialization format). Other options are over HTTP or a Ring-compatible mode. It could be a good option if you're looking for fast microservice development, without necessarily going through REST-ish things.

ztellman/potemkin Some little treasures here. In the end it is the usual "utils" library, but with a twist around creation of types and interfaces. import-vars is a useful tool from the point of view of library implementors: keep everything in nice, isolated namespaces while developing and then throw everything in the same bucket at the end. def-derived-map transforms normal objects into map-like objects. deftype+ supports def-abstract-type to create types that share functions (missing by design with deftype). If you do a lot of interop you don't want to miss this lib.

danielsz/system Interesting. It would be good if one opinionated collection of components built on top of Stuart Sierra's component lib emerged and received collective love from the community. We also maintain and evolve our own collection, but the dream of it being completely generic and community driven vanished long ago. So, if you maintain your own collection, consider contributing back to this library instead. Unless there is an even better one already :)

Design Patterns: Happy Birthday and Goodbye A few concise good points about GoF. There is still a lot of good wisdom in the original 23 GoF patterns (I have my copy on the bookshelf and you should have yours). But the heritage from C++ is making them gradually outdated. With dynamic languages and first-class functions some of them are even useless. Agreed, but I won't remove the book from my bookshelf :)

Tree visitors in Clojure Good article by Alex Miller, now Cognitect, of Strangeloop and The ConJ fame, about trees. He compares the OO approach with a few options in FP-land, including zippers. He’s also making use of a multimethod approach to visit tree nodes based on their type and take action. All illustrated with good examples and drawings.

max-key - Clojure stdlib More interesting bits from the standard library (they apparently never end). max-key (and its brother min-key) applies a function of your choice to a collection of elements, assuming the result of the fn is a number. It orders the results and returns the element that generated the highest (or lowest, respectively) result. Easy example: find the longest word with (apply max-key count ["asd" "asdf" "a"]) => "asdf"

Clojure macro to create a synonym for a function - Stack Overflow Creating an alias for a function name is not as frequent in Clojure as it is in Ruby. But the Ruby semantics are different: aliasing a function in Ruby offers the possibility to immediately dispose of an old method implementation and replace it with a new one without breaking compatibility, while still being able to use the old implementation internally, resulting in two function definitions with the same name that don't replace each other. In Clojure it would be somewhat complicated to obtain a similar effect (if you ever can't really do without it). With something simple like what is introduced in this SO question, you can at least have different names referring to the same function using just some syntactic sugar. Possible uses? Well, say you are translating an app from Scheme, where define is used instead of def. You could use a macro to make define work as an alias for def in Clojure and copy-paste throughout!

clojure macro to generate functions - Stack Overflow Here's another of those tricks that I used all the time in Ruby land: generating functions on the fly that can then be invoked from other parts of the application. In Clojure this can be done the dynamic way through "intern" and friends, or with a macro. Both options are present in this nice answer by amalloy. When to use? When a set of repetitive functions needs to be created, to avoid actual "copy paste". It might be a smell or not depending on the specific context, but I would keep the possibility in mind.

by Reblog at October 31, 2014 10:00 AM


why is Scala recommended over Java in Play Framework 2 [on hold]

After reading blogs/tutorials about Play Framework, it seems like Scala should be preferred over Java when creating an application in Play Framework 2.x. Are there any strong reasons for this?

by robo_here at October 31, 2014 09:55 AM

Map a Future for both Success and Failure

I have a Future[T] and I want to map the result, on both success and failure.

Eg, something like

val future = ... // Future[T]
val mapped = future.mapAll { 
  case Success(a) => "OK"
  case Failure(e) => "KO"

If I use map or flatMap, it will only map successful futures. If I use recover, it will only map failed futures. onComplete executes a callback but does not return a modified future. transform will work, but it takes 2 functions rather than a partial function, so it is a bit uglier.

I know I could make a new Promise, and complete that with onComplete or onSuccess/onFailure, but I was hoping there was something I was missing that would allow me to do the above with a single PF.

by monkjack at October 31, 2014 09:54 AM

Convert Play Framework java Promise to Play Framework Scala Promise

I am currently building a Scala Play Framework app which uses a library that returns results as F.Promise (the Java promise). Is there a way to convert an F.Promise into a Scala Promise, or to get the wrapped Scala Future out of the F.Promise?

The only way I see so far is getting the F.Promise's value directly, but that is a blocking operation and I would like to keep working asynchronously.

The way described in the first answer led me to this code. Unfortunately I don't know how to define this F.Function correctly. The code does not compile.

Answer: So, I finally found out that F.Promise has a method called wrapped(), which gives you back a Scala Future.

by MeiSign at October 31, 2014 09:51 AM


What is the set of operations of a Turing machine?


A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter.

The best-known example is the Turing machine.

  1. In a Turing machine, what is its set of operations?
  2. Is $\delta$ in the following definition of a Turing machine an operation? (If yes, then there is only one operation in a Turing machine?) Thanks.


    a (one-tape) Turing machine can be formally defined as a 7-tuple $M= \langle Q, \Gamma, b, \Sigma, \delta, q_0, F \rangle$ where

    $Q$ is a finite, non-empty set of states

    $\Gamma$ is a finite, non-empty set of tape alphabet symbols

    $b \in \Gamma$ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation)

    $\Sigma\subseteq\Gamma\setminus\{b\}$ is the set of input symbols

    $q_0 \in Q$ is the initial state

    $F \subseteq Q$ is the set of final or accepting states.

    $\delta: (Q \setminus F) \times \Gamma \rightarrow Q \times \Gamma \times \{L,R\}$ is a partial function called the transition function, where L is left shift, R is right shift.

by Tim at October 31, 2014 09:50 AM


How to model the effect of earnings surprises on long-term returns?

I'm looking into modeling the relationship between EPS announcement surprises with long-term returns (1 quarter to 3 years with intervals). I've based my current methodology off papers looking at the short term effect (example) but I think that the long time horizon will require a more comprehensive solution.

My ultimate goal is to be able to say with some degree of certainty whether or not beating or missing analysts' EPS estimates has a long term effect on the performance of a stock.

I've set up a regression with variables as follows:

I've defined EPS announcement surprises as

$$ \text{SURPRISE}_i=\dfrac{\text{EPS}_{actual,i}-\text{EPS}_{estimate,i}}{\text{EPS}_{actual,i}} $$

to create 2 variables for positive and negative surprises (POSSURPRISE and NEGSURPRISE)

Defined Returns as

$$ \text{RETURN}_t=\ln(\text{price}_{i+t})-\ln(\text{price}_i) $$

where $t$ is the final day of the time period I am analyzing

so my current regression looks like this

$$ \text{RETURN}_t = \beta_0 + \beta_p \text{POSSURPRISE}_{i}+\beta_n \text{NEGSURPRISE}_{i}+\epsilon_t $$

I've also done a regression with indicator variables for beating and missing estimates

I've run this over a sample set of 30 large-cap stocks with EPS data from 1999-2009 and the appropriate pricing data and so far have had mixed results: I found some correlation between 2-year returns and large earnings surprises, but before I explore this question further, I want to make sure I'm going about it the right way.
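The cross-sectional regression defined above can be sketched in a few lines of pure Python (a hypothetical illustration with my own function names, fitted by ordinary least squares via the normal equations $(X'X)\beta = X'y$):

```python
def ols(X, y):
    # Solve the normal equations (X'X) beta = X'y for a small design matrix.
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def surprise_design(surprises):
    # One row per announcement: [intercept, positive part, negative part].
    return [[1.0, max(s, 0.0), min(s, 0.0)] for s in surprises]
```

Splitting the surprise into positive and negative parts, as above, is what lets $\beta_p$ and $\beta_n$ differ, which is the asymmetry the question is after.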

My questions are:

  1. Is a regression of individual instances the best way to analyze this problem? Should I use time series methods like VAR instead?
  2. What is the best way to incorporate broad market movement into the returns data? Should I just adjust the return variable to account for the return on an index over the time period as well or is there a better solution?
  3. Am I better off just considering the surprise variable or should I try to control for other variables in the model such as actual EPS, Market Cap, etc?

by EHC at October 31, 2014 09:36 AM


Passing arguments to class as a val

When declaring a class in Scala, you can define the parameters as val, like this:

class MathOperations(val _x: Int, val _y: Int) {
  var x: Int = _x
  var y: Int = _y

  def product = x*y
}

But in this case, when I leave out the val keyword, an instance of the class behaves exactly the same (as far as I can figure out)

What is the difference between declaring the parameters as I did above, and doing it without val, like this:

class MathOperations(_x: Int, _y: Int) {
  // same body
}

by Marco Prins at October 31, 2014 09:29 AM

Eclipse Project with Scala Plugin, Maven and Spark

I have Eclipse Kepler, I have installed the Maven and Scala plugins. I create a new Maven project and add the dependency

groupId: org.apache.spark
artifactId: spark-core_2.10
version: 1.1.0

as per current doc at, all is fine, the jars for Scala 2.10 are also added to the project. I then add the "Scala Nature" to the project, this adds Scala 2.11 and I end up with the following error

More than one scala library found in the build path (C:/Eclipse/eclipse-jee-kepler-SR2-win32-x86_64/plugins/org.scala-lang.scala-library_2.11.2.v20140721-095018-73fb460c1c.jar, C:/Users/fff/.m2/repository/org/scala-lang/scala-library/2.10.4/scala-library-2.10.4.jar). At least one has an incompatible version. Please update the project build path so it contains only compatible scala libraries.

Is it possible to use Spark (from Maven) and Scala IDE Plugin together? Any ideas on how to fix this problem?

Thanks for your help. Regards

by Francesco at October 31, 2014 09:28 AM


Is a secondary TM sufficient to detect all loops?

Procedure: Start a secondary TM in parallel with the first, but have the second perform exactly 1 step each 2 steps the first TM performs (i.e. it runs at half speed). If the second machine ever reaches the same configuration as the first, a loop was detected (obviously).

Claim: All infinite loops in a TM can be detected in the above manner.

Has this been proven? I'm pretty sure I found this claim in a paper, I believe one of Marxen's, though I can't seem to find it.
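The procedure described is Floyd's tortoise-and-hare cycle detection. A sketch on an abstract step function (hypothetical naming; `step` returns the next configuration, or `None` on halting) makes the claim's scope precise: it detects exactly those non-halting runs whose configuration sequence eventually repeats. A TM that keeps growing its tape never repeats a configuration, so that kind of divergence is not caught, which is why this cannot decide halting in general.

```python
def detects_loop(step, start, max_steps=10_000):
    slow = fast = start
    for _ in range(max_steps):
        fast = step(fast)            # full speed: two steps per iteration
        if fast is None:             # halted
            return False
        fast = step(fast)
        if fast is None:
            return False
        slow = step(slow)            # half speed: one step per iteration
        if slow == fast:             # identical configuration => a cycle
            return True
    return False                     # undetermined within the step budget
```

The standard argument that this finds every cycle: once both pointers are inside a cycle of length c, the gap between them changes by 1 each iteration mod c, so they must coincide.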

by mafu at October 31, 2014 09:27 AM


Prove inductively: $g(n) = g(\log{n}) + n^{1/2} = O(n^{1/2})$ [on hold]

It seems to me that the following recurrence: $g(n) = g(\log{n}) + n^{1/2}$ has a tight upper bound of: $O(n^{1/2})$, however I am not sure how to prove this. Specifically, I would like to find an inductive proof.
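A sketch of the inductive step (an outline under stated assumptions, not a full proof): guess $g(n) \le c\sqrt{n}$ for all sufficiently large $n$ and some constant $c \ge 2$, and assume it holds for all smaller arguments.

```latex
\begin{aligned}
g(n) &= g(\log n) + n^{1/2} \\
     &\le c\sqrt{\log n} + \sqrt{n}  && \text{(inductive hypothesis on } \log n\text{)} \\
     &\le (c-1)\sqrt{n} + \sqrt{n}   && \text{(since } c\sqrt{\log n} \le (c-1)\sqrt{n} \text{ for large } n\text{)} \\
     &= c\sqrt{n}.
\end{aligned}
```

For example, with $c = 2$ the middle inequality needs $\sqrt{\log n} \le \tfrac{1}{2}\sqrt{n}$, i.e. $\log n \le n/4$, which holds for all $n \ge 16$; base cases below the threshold are absorbed by enlarging $c$.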


by Yechiel Labunskiy at October 31, 2014 09:25 AM


Factoring risk premium in to Forward Rate calculation

This is a self study question. I'm calculating a forward rate.

Specifically, I have that in a country X, the Spot Rate is 5X/1US. I also have that the 1 year interest rate is 13% in country X and inflation is 12%. The US interest rate is 4% with 3% inflation.

I'm computing the forward rate as:

$F= S(1+i_d)/(1+i_f) = 5 *(1+.04)/(1+.13) = 4.602.$

However, I'm also told that X's market risk premium is 300 basis points above US treasuries. I'm not sure how to factor that in.

by user1357015 at October 31, 2014 09:24 AM

Planet Theory

October 2014 issue of the Bulletin of the EATCS

The October issue of the EATCS Bulletin is now available online at from where you can access the individual contributions separately.

You can download a pdf with the printed version of the whole issue from

The Bulletin of the EATCS is open access, so people who are not members of the EATCS can read it. Let me thank the members of the association who make this service of the community possible with their support.  (EATCS members have access to the member area, which contains news and related articles and provides access to the Springer Reading Room. Young researchers can find announcements of open positions, news and related articles.)

This issue of the bulletin is brimming with interesting content, with five EATCS Columns and a piece by David Woodruff surveying the work for which he had received the EATCS Presburger Award 2014 amongst others. You might also enjoy reading the transcript of a dialogue between Christian Calude and Kurt Mehlhorn about theory, LEDA and Algorithm Engineering. I find it inspiring to read Christian's dialogues with famous members of our community and I always learn something useful from them. (Unfortunately, the lessons I think I learn do not make it often into my work practices. That's the theory-practice divide, I guess :-))

Here are a couple of excerpts to whet your appetite.
  • Kurt's motto, even definition, for Algorithm Engineering is: "Treat programs as first class citizens in algorithms research and not as an afterthought." He also adds that "Algorithm engineering is not only a sub-discipline of algorithms research. More importantly, it is a mind set."
  • CC: How do you manage to juggle between so many jobs in different countries?
    KM: I try to follow some simple principles.
    I avoid multi-tasking. I set aside time for particular tasks and then concentrate on them. For example, when I was writing my 1984 books and the LEDA book, I would work on the book every work day from 8:00 am to 12:00 pm. I would not accept phone calls or interruptions by students during this time. Now, the 8am to 12pm slot is reserved for reading, thinking and writing. The no-interruption rule still holds.
    I clean my desk completely every evening when I leave my office, so that I can start with an empty desk the next morning.
    When I accept a new responsibility, I decide, what I am going to give up for
    it. For example, when I became vice-president of the Max Planck Society in 2002 (for a 6 year term), I resigned as editor of Algorithmica, Information and Computation, SIAM Journal of Computing, Journal of Discrete and Computational Geometry, International Journal of Computational Geometry and Applications, and Computing.
    And most importantly, I am supported by many people in what I do, in particular, my co-workers, my students, and the administrative staff in the institute and the department. Cooperation and delegation are very important.
Enjoy the issue and consider contributing to future ones!

by Luca Aceto ( at October 31, 2014 09:05 AM



Clojure: invoking multiple arity functions

I have a problem invoking the multiple-arity method printf (specifically, on System/out).

user=> (.printf System/out (into-array Object ["foo"]))
IllegalArgumentException No matching method found: printf for class
clojure.lang.Reflector.invokeMatchingMethod (

by pmf at October 31, 2014 09:03 AM


Does Nelson-Siegel require adjustments to yield curve input data?

I am attempting to gain a better understanding of the limitations of the Nelson-Siegel model as described in Estimating the Yield Curve Using the Nelson-Siegel Model.

As I have been playing around with the data I started to wonder whether my inputs to the Nelson-Siegel model are correct. I am using Daily Treasury Yield Curve Rates and estimating the model through the R YieldCurve package. It has been my understanding that spot rates need to be derived from observable par yields before applying any modelling. This understanding, I believe, has been confirmed in a separate discussion. But the documentation of the relevant R packages fails to mention which rates should be supplied.

Should the input to the Nelson-Siegel model, in general and with respect to the R package, be the Daily Treasury Yield Curve Rates, or should one bootstrap the spot rates before applying the model?
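For concreteness, the Nelson-Siegel curve being fitted is sketched below in Python (my own naming; note that conventions for the decay parameter differ between papers, here $x = \tau/\lambda$). Whatever rates are fed to the fit, the fitted $y(\tau)$ is interpreted as a zero/spot rate, which is why bootstrapped spot rates, rather than par yields, are the natural input.

```python
from math import exp

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    # beta0 = level, beta1 = slope, beta2 = curvature, lam = decay parameter
    x = tau / lam
    loading = (1.0 - exp(-x)) / x              # slope loading: -> 1 as tau -> 0
    return beta0 + beta1 * loading + beta2 * (loading - exp(-x))
```

Sanity checks on the limits: at very short maturities the yield tends to `beta0 + beta1`, and at long maturities it tends to the level `beta0`.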

by RndmSymbl at October 31, 2014 09:02 AM


Scala parameters for access modifiers?

What is the difference between

class Test {
  private[this] val foo = 0
}

and

class Test {
  private val foo = 0
}

What all can go inside the []? Also, what should I search for when I want to look up the specs of this? I tried Googling various combinations of "scala access modifier arguments/parametrized scala access modifier" and nothing came up.

by wrick at October 31, 2014 08:47 AM


Bulk-loading R-tree with data with extent

When bulk-loading an R-tree with points one can simply sort the elements by some coordinate and split them into equal-sized chunks. But if the elements have some extent, sorting them by a coordinate value (whether minimum, average or maximum) can still lead to large overlap if some large elements are involved.

Is there some technique for splitting so that overlap is minimized?

by Jan Hudec at October 31, 2014 08:46 AM


How to transform an HList to another HList with foldRight/foldLeft

This question is derived from my previous question: What does HList#foldLeft() return?

I have this scenario:

class Cursor {}

trait Column[T] {
  def read(c: Cursor, index: Int): T
}

object Columns {
    object readColumn extends Poly2 {
        implicit def a[A, B <: HList] = at[Column[A], (B, Cursor, Int)] { case (col, (values, cursor, index)) ⇒
            ((, index) :: values, cursor, index + 1)
        }
    }

    def readColumns[A <: HList, B <: HList](c: Cursor, columns: A)(implicit l: RightFolder.Aux[A, (HNil.type, Cursor, Int), readColumn.type, (B, Cursor, Int)]): B =
        columns.foldRight((HNil, c, 0))(readColumn)._1
}

This code tries to read the values of several columns.

If I call readColumns(cursor, new Column[String] :: new Column[Int] :: HNil), I expect to get String :: Int :: HNil.

The readColumns() method compiles ok, but the compiler complains about implicits in concrete invocations.

What is the right way of working?


Here is the exact error message I'm receiving when invoking with 2 columns:

could not find implicit value for parameter l: 
[Column[String],shapeless.HNil]],(shapeless.HNil.type, android.database.Cursor, Int),readColumn.type,(B, android.database.Cursor, Int)]

Don't know how to help the compiler. :-(


Question: why specify HNil.type in the implicit parameter of readColumns(): RightFolder.Aux[A, (HNil.type, Cursor, Int), readColumn.type, (B, Cursor, Int)] ?

by david.perez at October 31, 2014 08:38 AM

How do I concat/flatten byte arrays

I'm making a function that generates a .wav file. I have the header all set, but I'm running into trouble with the data itself. I have a function for creating a sine wave at 880Hz (at least I think that's what it does, but that's not my question)--the question is, how do I convert a collection of byte arrays to just one byte array with their contents? This was my best try:

(defn lil-endian-bytes [i]
  (let [i (int i)]
    (map #(.byteValue %)
         [(bit-and i 0x000000ff)
          (bit-shift-right (bit-and i 0x0000ff00) 8)
          (bit-shift-right (bit-and i 0x00ff0000) 16)
          (bit-shift-right (bit-and i 0xff000000) 24)])))

(def leb lil-endian-bytes)

(let [wr (io/output-stream (str name ".wav") :append true)]
  (.write wr
       (byte-array (flatten (concat (map 
         (fn [sample] (leb (* 0xffff (math/sin (+ (* 2 3.14 880) sample)))))
         (range (* duration s-rate))))))))

but it doesn't do what I want it to do: concat all of the byte-arrays into one vector and then into a single byte array. And it makes sense to me why it can't: it can't concat/flatten a byte[] because it's not a vector; it's a byte[]. And it can't cast a byte[] into a byte. But what do I need to do to get this working?
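For comparison, the per-sample little-endian packing plus flattening into one contiguous buffer can be sketched with Python's struct module (sample values below are hypothetical; this mirrors only the lil-endian-bytes/flatten part, not the sine-wave generation):

```python
import struct

def lil_endian_bytes(i):
    """Four little-endian bytes of a 32-bit int, like the Clojure fn above."""
    return struct.pack("<i", i)

# Flatten each sample's 4-byte chunk into one contiguous byte buffer.
samples = [0, 1, 256, -1]  # hypothetical sample values
data = b"".join(lil_endian_bytes(s) for s in samples)
print(data.hex())  # 000000000100000000010000ffffffff
```

The key point is the same in either language: concatenate the per-sample byte chunks into a single flat byte sequence before handing it to the output stream.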

by tjb1982 at October 31, 2014 08:29 AM


Kleene positive closure - help in proving this claim

I just started a course called 'Automata and Formal Languages'. I'm having difficulty proving or disproving this equality.

$ (L_{1} \circ L_{2})^{+} = L_{1}^{+} \circ L_{2}^{+} $


$ L_{1} $, $L_{2}$ are Languages.

$\circ$ is the concatenation operation between two languages.

$+$ is the Kleene plus closure defined by $\bigcup _{i = 1}^{\infty }L^{i} $

I tried finding a counterexample and also tried a formal proof, but had no luck. Can someone please point me in the correct direction?
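Brute-force enumeration over small witness languages is a handy way to hunt for a counterexample before attempting a proof. A Python sketch (the witness languages $L_1 = \{a\}$, $L_2 = \{b\}$ are my choice, not from the question):

```python
def concat(A, B):
    """Concatenation of two languages given as sets of strings."""
    return {a + b for a in A for b in B}

def plus(L, max_len):
    """Kleene plus of L, truncated to words of length <= max_len."""
    result = {w for w in L if len(w) <= max_len}
    layer = set(result)
    while layer:
        layer = {w for w in concat(layer, L) if len(w) <= max_len} - result
        result |= layer
    return result

L1, L2 = {"a"}, {"b"}
lhs = plus(concat(L1, L2), 4)                       # (L1 . L2)+
rhs = {w for w in concat(plus(L1, 4), plus(L2, 4))  # L1+ . L2+
       if len(w) <= 4}
print(sorted(lhs))                   # ['ab', 'abab']
print("aabb" in rhs, "aabb" in lhs)  # True False
```

The enumeration shows $aabb \in L_1^{+} \circ L_2^{+}$ but $aabb \notin (L_1 \circ L_2)^{+}$, which tells you which direction of the claimed equality to attack formally.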

by elmekiesIsrael at October 31, 2014 08:29 AM


IF implementations in C#

The majority of developers write IF statements in the following way

if (condition)
    //Do something here

Of course this is considered normal, but it can often create nested code which is not so elegant and a bit ugly. So the question is: is it possible to convert traditional IF statements to functional ones?
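For comparison, many languages already expose the conditional as an expression that yields a value rather than a statement; in C# this is the conditional operator `cond ? a : b`. A sketch of the two shapes (in Python, purely for illustration):

```python
# Statement form: the branch assigns as a side effect, nesting grows.
def describe_statement(n):
    if n % 2 == 0:
        label = "even"
    else:
        label = "odd"
    return label

# Expression form: the conditional itself produces the value.
def describe_expression(n):
    return "even" if n % 2 == 0 else "odd"

print(describe_statement(4), describe_expression(7))  # even odd
```

The expression form composes (it can appear inside another expression), which is what usually flattens the nesting the question complains about.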

** This is an open question about possible ways to produce more readable code, and I prefer not to accept any answer. I believe it is better for people to choose the best solution for themselves and vote for the answer they chose.

by Jorge Code at October 31, 2014 08:29 AM

Json4s CustomSerializer problem with List value

I noticed a strange issue when I was trying to write a custom serializer for the following json

{
  "contentServiceId" : 123,
  "contentServiceDisplayName": "Bank",
  "containerInfo" : {
    "containerName": "Container"
  },
  "geographicRegionsServed": [
    { "country": "US" },
    { "country": "FI" }
  ]
}

First I assumed the following would work. And it actually works, but ONLY IF there is more than one country listed. If there is only one country it fails, because it tries to transform JValue("US") into List("US"), which fails; so for some reason Json4s forgets that there is an array if the array contains only one element.

val contentServiceSerializer = new CustomSerializer[ContentService](implicit format => (
  {
    case c: JValue =>
      ContentService(
        id = (c \ "contentServiceId").extract[Long],
        name = (c \ "contentServiceDisplayName").extract[String],
        containerType = (c \ "containerInfo" \ "containerName").extract[String],
        countries = (c \ "geographicRegionsServed" \ "country").extract[List[String]])
  },
  { case _ : Transaction => JNothing } // We do not do code to Yodlee json transformations
))

But because I can't be sure whether there is one country or more, I had to use the following. It works, but it's not so pretty.

val contentServiceSerializer = new CustomSerializer[ContentService](implicit format => (
  {
    case c: JValue =>
      ContentService(
        id = (c \ "contentServiceId").extract[Long],
        name = (c \ "contentServiceDisplayName").extract[String],
        containerType = (c \ "containerInfo" \ "containerName").extract[String],
        countries = (c \ "geographicRegionsServed").children.map(region => (region \ "country").extract[String]))
  },
  { case _ : Transaction => JNothing } // We do not do code to Yodlee json transformations
))

So I have a working solution but would prefer to have clear code without extra complexity. I would think I could make that parsing block clearer if I had an extra custom serializer for country, but because I don't need any extra class there it would be just extra code.

I am just wondering if there is something I just don't see because this feels stupid :)
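The workaround boils down to "normalize a value that may be a single element or an array into a list". As a language-neutral sketch of that defensive pattern (Python's json module always keeps arrays as lists, so the helper here is purely illustrative of the idea, not of Json4s behaviour):

```python
import json

def as_list(value):
    """Normalize a value that may be a scalar/object or an array into a list."""
    return value if isinstance(value, list) else [value]

doc = json.loads('{"geographicRegionsServed": [{"country": "US"}]}')
regions = as_list(doc["geographicRegionsServed"])
countries = [r["country"] for r in regions]
print(countries)  # ['US']
```

Wrapping the extraction site in a single normalizing helper keeps the "one element vs many" special case out of every serializer that touches the field.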

by Jari Kujansuu at October 31, 2014 08:18 AM

Unable to read variable annotations from Scala macro

I'm trying to write a macro in Scala which reads variables with a certain annotation in order to manipulate them, but the annotations property of variable symbols always seems to return an empty list.

Annotation signature:

class Inject extends StaticAnnotation

Annotation usage:

object App {
  @Inject
  var service: HttpService = _
}

Macro definition (blackbox) :

def inject[T <: Config](c: Context)(target: c.Expr[T])(implicit tag: c.WeakTypeTag[T]): c.Expr[ModuleProxy] = {
  import c.universe._

  val fields = tag.tpe.members collect { case s: TermSymbol => s }

  // Always empty.
  fields foreach { s => println(s"$s : ${s.annotations}") }
}

Is there something I should look into? Any kind of suggestions would be much appreciated.


by mysticfall at October 31, 2014 08:14 AM

Is there any workaround for Scala bug SI-7914 - returning an apply method from a Scala macro?

In our library we have a macro defined like so:

def extractNamed[A]: Any = macro NamedExtractSyntax.extractNamedImpl[A]

This macro uses its type signature to return an apply method whose arguments match the apply method of the case class on which it is typed.

The idea is that this allows the user of our library to make use of Scala's named parameters like so:

lazy val fruitExtractor = extractNamed[Fruit](
  name =,
  juiciness = FruitTable.juiciness
)

However, this doesn't seem to work - it instead fails at compile-time with error: Any does not take parameters.

It seems like this is caused by SI-7914 because it does work if the user explicitly calls apply, i.e.:

lazy val fruitExtractor = extractNamed[Fruit].apply(
  name =,
  juiciness = FruitTable.juiciness
)

We can work around this by simply renaming the returned method to something sensible, but is there any other way to work around this without requiring an explicit method call?

by David Gregory at October 31, 2014 08:01 AM


Planet Clojure

How we deploy at MixRadio

In this post I’d like to talk about how we deploy our microservices into EC2 here at MixRadio. I’ll spend a bit of time explaining our old process and some of the problems it had, and then I’ll go into more detail on the new Clojure tools we created to solve them.

Out with the old…

Deon has described our journey from monolithic, big-bang releases to continuous delivery. This was a change that was undertaken while we were running MixRadio out of our own datacenter.

As we became more comfortable with this new way of working we identified a number of flaws in the way we were deploying and running our services in the datacenter:

  • Snowflake servers - Our servers were more than just an IP address and purpose. Configuration drift would see the servers gradually begin to differ as they lived longer.
  • Provisioning the servers took too long - There were too many manual tasks involved in getting a new server up and running. We had Puppet to cover the basic bootstrapping but still had to enter the server’s details into our deployment tool and load balancers.
  • Deployment time - As the number of instances increased, the time it took to deploy increased linearly. Not a desirable property if you want to maintain the ability to make frequent changes to a high-scale service.
  • Configuration was murky - Changes made to configuration were tracked but it was unclear exactly what had changed and in the event of a rollback being required we would frequently find properties were not reverted.

Old deployment process

Our existing deployment process was the result of our transition to continuous delivery. The application which carried out deployments was created in-house and was beginning to show its age. We created it at a time when we were deploying a small number of applications to a couple of servers each for high-availability.

To make a change to our live platform you would log into a website, enter the version of the service you were deploying, perhaps amend the configuration and kick off your deployment. The deployment tooling would go through each server hosting the service, in turn. It would use SSH to remove the old version of the application and then the new version, with new configuration, would be installed. This process is shown below for an example four server deployment:

As we needed to deploy to more servers to handle additional load we found that this approach to deployment was slowing us down. In the event of failure, this method of deploying would also require linear time to perform the same operations in reverse which is bad if a deployment has caused things to go haywire.

We’d been steadily increasing the number of microservices we were running and the existing process wouldn’t allow the deployment of more than one service to an environment at a time. This was a self-imposed restriction we’d chosen back when we were starting out with continuous delivery because we weren’t happy with more than one piece of the platform changing at a time. We felt that it would make it difficult to determine where any regression or failure had come from if more than one thing was in-flight at a given time. Since that decision was made we’d become more adept at monitoring our services and the platform as a whole so we were keen to see how we would get on without enforcing that restriction in the new world.

… in with the new

Last year, we knew that we wanted to migrate out of the datacenter and into AWS. It was a good time for us to change our deployment process and we had a vague idea of what it might look like from reading about other teams’ tools.

We knew some of the drawbacks of our current process but wanted to make sure we avoided making the new tools painful as well. We used a large whiteboard in the office to let people put up suggestions for functionality the tools should have or things we should consider in the overall design.

After about a month a group of developers got together and went through the suggestions. They were prioritised and became the goals for the team developing our new tooling. We wanted to begin migrating services to EC2 as soon as possible but had to balance that with making sure everything was working smoothly. We decided that the easiest way to get the tooling up and running was to attempt to do everything required to deploy:

  • A skeleton service which represented a typical service with no dependencies.
  • An actual service which had dependencies on other services already running in our existing datacenter (this was important because we knew that we weren’t going to be doing a big-bang migration).
  • The services which formed the tooling itself.

It was felt these would allow us to dogfood to a point where we were comfortable everything was working safely and the process reflected what we’d like to be using as developers. From there we would be able to open up the tooling to other developers who could begin the migration.

We had six services which we’d need to create to form the tooling and provide the experience we were looking for:

  1. Metadata
  2. Baking
  3. Configuration
  4. Infrastructure management
  5. Deployment orchestrator
  6. Command-line tool


Metadata

We had multiple copies of what was essentially the same list of services: we had one in our logging interface, one in the deployment application and others which all had to be kept up-to-date. We wanted to create a service which just provided that list and meant that anyone who wanted to iterate over the services (including our own tooling) could do so from one canonical source. We also realised that we could attach arbitrary metadata to those service descriptions which would allow us to answer frequent questions like ‘who should I go to with questions about service X?’ and ‘where can I find the CI build for service Y?’.

We created a RESTful Clojure service which exposes JSON metadata for our services. The metadata for each application can be retrieved individually and edited one property at a time. The minimal output for an application ends up looking something like this:

{
  "name": "search",
  "metadata": {
    "contact": "",
    "description": "Searching all the things"
  }
}
The important thing here is that there’s no schema to the metadata. We have a few properties which are widely used throughout the core tooling but anyone is free to add anything else and do something useful with it.


Baking

Having seen the awesome work the guys at Netflix have done with their tooling we knew that we liked the idea of creating a machine image for each version of a service rather than setting up an instance and then repeatedly upgrading it over time. We already knew we had a problem with configuration drift and using an image-based approach would alleviate a lot of our problems with snowflake servers. Even if someone had made changes to an individual instance they would be lost whenever a new deployment happened or the instance was replaced. This pushes people towards making every change repeatable.

We were aware of Netflix’s Aminator which had just been released. However we had a few restrictions around authentication that made it difficult to use and wanted a little more flexibility than Aminator provided.

Our baking service is written in Clojure and shells out to Packer which handles the interactions with AWS and running commands on the box. We split our baking into two parts to ensure that baking a service is as fast as possible. The first part takes a plain image and installs anything common across all of our images. This is run once a week automatically, or on demand when necessary. The second part, which is much quicker, installs the service ready to be deployed.


Configuration

To handle differing configuration across our environments we created a service to store this information. We wanted to provide auditing capabilities to see how configuration had changed and have confidence that when we rolled back, we were reverting to the correct settings.

We were busily planning away and thinking about how to solve the problem of concurrent updates to configuration and how to store the content when we realised that we actually already had the basics for this service on every developer’s machine. We’d veered dangerously close to attempting to write our own Git. It (thankfully) struck us that we could build a RESTful service (in Clojure, of course) which exposed the information within a Git repository. Developers wouldn’t make changes to the configuration via this service, it would be read-only. They would use the tools they’re familiar with from the comfort of their own machine to commit changes. Conflicts and merges would then be handled by software written by people far cleverer than us and auditing is as simple as showing the Git log.

For each service and environment combination we have a Git repository containing three files which allow developers to control what’s going to happen when they deploy:

  • application-properties.json - The JSON object in this file gets converted to a Java properties file and the service will use this for its configuration.
  • deployment-params.json - This file controls how many instances we want to launch, what type of instance they’ll be etc.
  • launch-data.json - This file contains an array of shell command strings which will be executed after the instance has been launched. This functionality doesn’t tend to get used for most services, but has allowed us to automatically create RAID disks from ephemeral storage or enable log-shipping only in certain environments.

We originally thought that we could just grab the configuration, based on its commit hash, from the configuration service during instance start-up. However, we realised that our configuration service could be down at that point, or the network could be flakey. That’s not a huge problem if someone is actively deploying but if the same situation occurs in the middle of the night when a box dies, we need to know there is very little that can stop that instance from launching successfully. For this reason the configuration file and any scripts which run at launch (which are likely to differ from environment to environment) are created at launch time from user-data by cloud-init. User-data is part of the launch configuration associated with the auto scaling group and is obtained from the web-server which runs on every EC2 instance, making it a reliable place to keep that data. This method means that our service image can be baked without needing to know which environment it will eventually be deployed to, preventing us from having to bake a separate image for each environment.

Infrastructure management

The AWS console is an awesome tool (and keeps getting better) but it’s not the way we really wanted to have people configuring their infrastructure. By infrastructure we mean things which are going to last longer than the instances which a service uses (for example: load balancers, IAM roles and their policies etc.). We’ve already blogged about this service, but the basic idea is that we like our infrastructure configuration to be version-controlled and the changes made by machines.

Deployment orchestrator

When we started developing our tooling we were pretty green in terms of AWS knowledge and were, in some ways, struggling to know where to start. We knew about Netflix’s Asgard and the deployment model it encouraged made perfect sense as the base for our move into AWS.

We started using Asgard but found that our needs were sufficiently different that we ended up moving away from it and creating something similar. I’ll run through our initial use of Asgard before describing what we came up with.

Red-black deployment

While we’re on the subject of Asgard’s deployment model I’ll describe red-black (or green-blue, red-white, a-b) deployment in case anyone isn’t familiar with it. As shown in our existing deployment model above, we had a problem with the linear increase in deployment (and, perhaps more importantly, rollback) time. We also didn’t like the idea that, during a deployment, we’d be decreasing capacity while we switched out and upgraded each of the instances. A number of our services run on two instances not due to load requirements but merely to provide high-availability so a deployment of these services would result in the traffic split going from 50% on each instance to 100% on one instance. At this point, if anything happens to that single instance the service would be unavailable.

The red-black deployment model does a good job of solving these issues while also simplifying the logic required to make the deployment. Here’s our previous four server deployment in red-black style:

The main benefits of this deployment model are:

  • Capacity is temporarily increased during deployment, rather than reduced.
  • There are opportunities to pause the deployment process and evaluate the new version of the service under live load for any ‘unforeseen consequences’.
  • Rollback is as simple as allowing traffic back onto the old version of the service and preventing traffic to the new version. How long we keep those instances alive after deployment is up to us.
  • The unit of deployment we’re dealing with is an auto scaling group which encourages their use for any deployment whether to single or multiple instances. This pushes us to automate enough that if an instance is being troublesome (or if it simply dies during the night) we have the confidence that it will be terminated and another will take its place.

Back to the orchestration service…

This service is the way developers kick off their deployments. It would then defer to the Asgard APIs to monitor the progress of the deployment and, since Asgard doesn’t have long-term storage, store deployment information so we could track what we’d been up to.

We had originally intended for the service to use Asgard’s functionality to automatically move through the steps of a deployment but found that because we weren’t using Eureka, we needed to be able to get in between those steps and check the health of our services. So the orchestration service was written to operate as a mixture of Asgard actions and custom code which performed ‘in-between’ actions for us.

A deployment consisted of six actions:

  • create-asg - Use Asgard to create an auto scaling group.
  • wait-for-instance-health - Once the auto scaling group is up and running hit the instances with an HTTP GET and wait for a healthcheck to come back successfully.
  • enable-asg - Use Asgard to enable traffic to the newly-created auto scaling group.
  • wait-for-elb-health - An optional step, wait until the instances from the newly-created auto scaling group are shown as healthy in the load balancer.
  • disable-asg - Use Asgard to disable traffic on the old auto scaling group.
  • delete-asg - Use Asgard to delete the old auto scaling group and terminate its instances.

To us, a deployment is a combination of an AMI and the commit hash from the configuration Git repository for the chosen service and environment. During the creation of the auto scaling group our deployment tooling will create the relevant user-data for the commit hash. If we use the same combination of AMI and commit hash we will get the same code running against the same configuration. This is a vast improvement on our old tooling where we’d have to manually check we’d reset the configuration to the old values.

As we started migrating more services out of the datacenter we found that we wanted more customisation of the deployment process than Asgard provided. We were already running a version of Asgard which had been customised in places and were finding it difficult to keep it up-to-date while maintaining our changes. We made the decision to recreate the deployment process for ourselves and keep Asgard as a very handy window into the state of our AWS accounts.

We stuck to the same naming-conventions as Asgard, which meant that we could still use it to display our information, but recreated the entire deployment process using Clojure. It wasn’t an easy decision to make but it was considered valuable to us to have complete control over our deployment process without pestering the guys at Netflix to merge pull-requests for functionality which was probably useful only to us.

We’re really happy with our Asgard-esque clone. We broke the existing six actions down into smaller pieces and a deployment now runs through over fifty actions.

A deployment is still fundamentally the same as before:

  • Grab the configuration data for the service and environment we’re deploying it to.
  • Generate user-data which will create the necessary environment-specific configuration on the instances.
  • Create a launch configuration and auto scaling group.
  • Wait for the instances to start.
  • Make sure the instances are healthy.
  • Start sending traffic to the new instances.
  • Once we’re happy with the result the old auto scaling group is deactivated and deleted.

The only difference is that we’re now able to control the ordering of actions at a fine-grained level and quickly introduce new actions when they’re required.

Once a deployment has started the work is all done by the orchestration service as it moves through the actions. A human can only step in to pause a deployment and undo it if there are problems. For a typical deployment a single command will kick off the deployment and the developer can watch the new version being rolled out followed by the old version being cleaned up. Undoing a deployment consists of running the same list of deployment actions, but recreating the old state of the service rather than the new.
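As an illustrative sketch only (hypothetical names and data structures; the real orchestration service is written in Clojure and runs through many more actions), the core red-black swap and its undo might be modelled like this:

```python
def red_black_deploy(state, new_version, healthy):
    """Sketch of a red-black deployment over auto scaling groups.

    `state` tracks the active and previous groups; `healthy` stands in for
    the instance/ELB health checks. All names here are illustrative.
    """
    old = state.get("active")
    new = {"version": new_version, "serving": False}

    # create-asg + wait-for-instance-health
    if not healthy(new):
        raise RuntimeError("new auto scaling group failed health checks")

    # enable-asg: start sending traffic to the new group
    new["serving"] = True

    # disable-asg: stop traffic to the old group, but keep it around so
    # rollback is just re-enabling it (delete-asg happens later)
    if old is not None:
        old["serving"] = False

    state["active"], state["previous"] = new, old
    return state

def rollback(state):
    """Undo: re-enable the previous group and disable the new one."""
    state["active"], state["previous"] = state["previous"], state["active"]
    state["active"]["serving"] = True
    state["previous"]["serving"] = False
    return state
```

The essential property is that the old group outlives the traffic switch, so rollback never has to rebuild anything.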

Command-line tool

In our whiteboard exercise, people had expressed a preference for CLI-driven deployment tooling which could be used in scripts, chained together etc. so we wanted to prioritise the command-line as the primary method for deploying services, with a web-interface only created for read-only display of information.

We love Hubot and love Hubot in Campfire even more, so we created a command-line application, written in Go, which has become the primary method for developers to carry out a large number of the tasks required when building, deploying and managing our services. We’ve written ourselves a Hubot plugin which allows us to use the command-line application from Campfire, which means that I can kick off that bake I forgot to start before I went to lunch while I’m standing in the queue.

The choice of Go for the tool was an interesting one. We make no secret that we’re big fans of Clojure here at MixRadio but the JVM has a start-up cost which isn’t suitable for a command-line tool that is being used for quick-fire commands. It was a shoot-out between Go, Node.js and Python. In the end Go won because it starts up quickly, can produce a self-contained binary for any platform (so no fiddling with pip or npm) and we wanted to have a go at something new.

Now, the typical workflow for deploying changes to a service looks like this:

# Make awesome changes to a service and commit them
$ klink build search
... Release output tells us we've built version 3.14 ...
# Bake a new image
$ klink bake search 3.14
... Baking output tells us that our AMI is ami-deadbeef ...
# Deploy the new image and configuration to the 'prod' environment
$ klink deploy search prod ami-deadbeef -m "Making everything faster and better"
... Deployment output tells us it was successful ...
# Profit!

Hopefully this overview has given a taste of how we manage deployments here at MixRadio. If there’s anything which is unclear then please get in touch with us via the comments below or on Twitter. We value any feedback people are willing to give.

We’re proud of our deployment tooling and are grateful to those who have provided inspiration and the code on which we’ve built. We’re currently going through the process of getting our tooling ready to open-source and hope to have something ready by the end of 2014 so keep an eye on this blog and our Github account for more details.

The author

Neil Prosser works on the team responsible for the deployment tooling our developers use to get features deployed and in the hands of our users. He joined MixRadio six years ago as a .NET developer, transitioning to Java and then Clojure as we evolved.

by MixRadio at October 31, 2014 08:00 AM


Ternary (and beyond) computation and quantum computing?

Binary math is at the heart of most computing, in large part because of the ease with which two energy states can be achieved. I have always thought that having more states could improve computing power (e.g. using a trit instead of a bit), but there is a seeming lack of attention being paid to the problem.

Some work has been done in the quantum computing area using qubits to achieve more than the normal two energy levels (see, for example, the recent effort at UCSB).

What advances have been made toward having ternary (or greater) computers, and what are the primary implications of the extra states?

by Shane at October 31, 2014 07:59 AM


How do I get systemd to start my ZFS mount script before doing anything else?

Right now I'm using zfsonlinux on Fedora 19, and it works excellently. Mounting works at bootup as planned using the initscript (included with zfs) that systemd calls, but there's a pretty significant flaw: ZFS is mounted way, WAY too late during boot, to the point that transmission and other services get very lost without /zfs mounted and can't recover, hence failing them.

What I'm after is a way to get systemd to fire up the script before doing anything else. I know how to do it using initscripts (just set it to a lower runlevel, say 2), but I haven't the foggiest when it comes to doing it with systemd's target system.

I've tried setting things in the services that require ZFS to be up, but it doesn't work, barring Type=idle (but there must be a better way of doing it regardless).

And for reference's sake, here's the initscript provided by Fedora. Due to the nature of how ZFS mounts on Linux (?), I can't just plop an entry into fstab and be done with it; it has to be mounted using commands.

by luaduck at October 31, 2014 07:55 AM


Scala: Why does foldLeft not work for a concat of two lists?

Defining a concat function with foldRight, as below, concats lists correctly

def concat[T](xs: List[T], ys: List[T]): List[T] = (xs foldRight(ys))(_ :: _)

but doing so with foldLeft

def concat1[T](xs: List[T], ys: List[T]): List[T] = (xs foldLeft(ys))(_ :: _)

results in a compilation error value :: is not a member of type parameter T. I need help in understanding this difference.
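The difference is argument order: foldRight hands the operator (element, accumulator), so x :: acc type-checks with the accumulator being a List[T], while foldLeft hands it (accumulator, element), so the compiler looks for :: on the element type T. The same shape can be sketched with Python's reduce, which is a left fold (an illustration, not the Scala implementation):

```python
from functools import reduce

xs, ys = [1, 2, 3], [4, 5]

# foldRight-style: walk xs right-to-left, accumulator starts as ys and each
# element is PREPENDED to it -- this is the shape cons (::) expects.
fold_right = reduce(lambda acc, x: [x] + acc, reversed(xs), ys)
print(fold_right)  # [1, 2, 3, 4, 5]

# foldLeft-style: the accumulator comes FIRST, so the natural operation is
# appending the element to the accumulator; trying to treat it as
# (element :: accumulator) is exactly the type mismatch Scala reports.
fold_left = reduce(lambda acc, x: acc + [x], xs, ys)
print(fold_left)  # [4, 5, 1, 2, 3]
```

Note the left fold naturally produces ys ++ xs, not xs ++ ys, which is a second reason the two folds are not interchangeable here.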


Just in case someone might be looking for a detailed explanation on folds

by Somasundaram Sekar at October 31, 2014 07:45 AM



Calling collect() on Spark custom RDD resulting in an empty collection

For the following RDD (relevant sections shown):

val myRdd = new RDD[RddOutput](zippedRows) {

  override def compute(split: Partition, context: TaskContext): Iterator[RddOutput] = {
      val out =  // computes a list of items
    out.toIterator  // Breakpoint set here: out is non-empty


When invoking the rdd:

val outVects = myRdd.collect
val veclen = outVects(0).size    // outVects is empty!

So as the comments note: the output iterator within compute() is non-empty, but then there is no data returned from the collect() invocation. Any ideas?

by javadba at October 31, 2014 07:31 AM


Normal vol - convention

Apologies for the simplicity of the question, but I was wondering: what is the quoting convention for normal (bps) volatility?

Say I have the following time series of data:

Date    Close     Abs Change
20-Oct  1000.00
21-Oct  1003.53    3.53
22-Oct  1004.79    1.25
23-Oct  1009.88    5.09
24-Oct  1002.02   -7.86
25-Oct  1005.96    3.94
26-Oct  1004.96   -1.00
27-Oct  1008.30    3.33
28-Oct  1002.18   -6.12
29-Oct  1004.95    2.77
30-Oct  1000.95   -4.00
31-Oct  1008.19    7.24

It follows that the 10-day realized (normal) vol (calculated as the stdev of the abs changes) is 6.34, and the annualized vol is 100.75 (using an annualization factor of 252).
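As a sketch of that calculation (Python; recomputing the changes from the closes, so the resulting stdev may not reproduce the quoted 6.34 exactly), the point for the units question is that the vol comes out in the same units as the daily changes:

```python
import math
import statistics

# daily changes recomputed from the closes in the table above
changes = [3.53, 1.26, 5.09, -7.86, 3.94, -1.00, 3.34, -6.12, 2.77, -4.00, 7.24]

daily_vol = statistics.stdev(changes)    # same units as the changes (index points)
annual_vol = daily_vol * math.sqrt(252)  # annualized with a 252-day factor
```

Nothing in the procedure rescales the data, so both numbers inherit the underlying's units.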

My question is: what units are these vol levels of 6.34 & 100.75 in? If the underlying were a stock index, would it be index points (or, in the case of FX, pips/big figures)?

Thanks in advance

by tyler_durden at October 31, 2014 07:18 AM


How to integrate maven and play framework

I have been learning the Play framework and am having problems integrating it with Maven. I already have a web app in Maven and want to add Play features to it as a module. Can I create a Maven module, add the Play 2 dependencies to the pom, and start using the Play framework? Also, how can I build and package this as a war, since a Play app has a different project structure? Is there an Ant or Maven plugin to build Play apps as wars?

Please, somebody guide me on how to proceed.

by sr.praneeth at October 31, 2014 06:55 AM

How to change time of scheduled jobs for every child actor akka scala?

I'm trying to send multiple messages to clients on a schedule, at the same time. Every message has some destinations. I've created an actor for each message (not for each destination), and this actor sends the message to its destinations.

Here is the deal: if I edit or delete a message that has not been sent yet, its schedule time may have changed. I need to stop the existing schedule and then start a new one with the new time.

Here is the pieces of my code:

for(scheduleMsg <- scheduleMsgs) {
  val difference = scheduleMsg.scheduleTime.getTime - System.currentTimeMillis
  val childActor = if (lastId == {
  } else {
    lastId =
    lastActor = newChildActor(lastId)
  cancellable(difference, childActor, msg)

def cancellable(difference: Long, childActor: ActorRef, msg: Message) = {
  actorSystem.scheduler.scheduleOnce(difference milliseconds, childActor, msg)


I don't want to use a mutable List for saving the Cancellables.

How can I concurrently find a child actor by name?
How do I call cancel() for this child's scheduled message?

by Khwarezm Shah at October 31, 2014 06:08 AM

How to link to JDK?

I want to have links like

  • Javadoc {@link Object#toString()}
  • Scaladoc [[Object#toString()]]

How can I do this with scaladoc?

I assume I'll have to provide the external URL to link to.

by Paul Draper at October 31, 2014 06:05 AM


Graph (Forest) representation that supports edge deletion and efficient traversal

I am trying to write a data structure that given a general tree (or forest) will support the following operations:

  1. Edge deletion
  2. Connected(u,v) queries

This problem is addressed in section two of the ACM journal article "An On-Line Edge-Deletion Problem". The idea claims to be able to carry out q edge deletions in O(q + |V| log |V|) time, while allowing constant-time Connected(u,v) queries. The idea: maintain a table mapping each vertex to a connected component. Upon each deletion, each of the two new trees is scanned in parallel; whichever tree finishes being scanned first becomes a new component. Now my question is: which graph representation can I use to implement their idea? On one hand I need to be able to delete an edge without having to scan an O(|V|) adjacency list; on the other hand, I need to be able to run a traversal (DFS) in O(|E|) = O(|V|) time (for a tree), which can't happen using a matrix.
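One candidate representation (a sketch of my own, not taken from the paper) is adjacency sets: hashed sets give expected O(1) edge deletion, while DFS over them still runs in O(|V| + |E|) time, avoiding both the adjacency-list scan and the matrix:

```python
from collections import defaultdict

class Forest:
    """Undirected forest stored as adjacency sets."""

    def __init__(self):
        self.adj = defaultdict(set)

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def delete_edge(self, u, v):
        # hashed-set removal: expected O(1), no list scan
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def component(self, start):
        # iterative DFS: O(|V| + |E|) over the current adjacency sets
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in self.adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
```

The vertex-to-component table from the article would then be updated by running `component` on the two endpoints of a deleted edge in parallel and relabeling whichever scan finishes first.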

by Yechiel Labunskiy at October 31, 2014 06:02 AM

Lamer News


Scala : Does variable type inference affect performance?

In Scala, you can declare a variable by specifying the type, like this: (method 1)

var x : String = "Hello World"

or you can let Scala automatically detect the variable type (method 2)

var x = "Hello World"

Why would you use method 1? Does it have a performance benefit?
And once the variable has been declared, will it behave exactly the same in all situations, whether it has been declared by method 1 or method 2?

by Marco Prins at October 31, 2014 05:57 AM




Residual maturity vol

The following question is probably (from a practical point of view) more relevant for EM markets which typically exhibit a more pronounced forward volatility compared to spot volatility.

Say I buy a 1-year option today; the exposure it confers today is to the realized vol of the 1-year forward (i.e. I am long the gamma of the 1-year forward). However, three months from now, my exposure will be to the realized volatility of the 9-month forward; and 6 months from now it will be to the realized volatility of the 6-month forward, etc. So really, at any point t in [0,T], my gamma exposure is to the realized volatility of the residual-maturity forward (Sigma[t,T]). In several EM markets, one finds that the realized volatility tends to increase with the tenor of the forward (i.e. vol of the 1yr fwd > vol of the 9mth fwd > vol of the 6mth fwd, etc.)

The point is that the gamma/vol exposure that a 1-year option confers during its lifetime should be sort of an average of the vols of the 1mth, 3mth, 6mth, 9mth and 1yr forwards (as opposed to, e.g., the realized vol of the 1yr constant-maturity forward only).

I was wondering what the best way would be to calculate the residual maturity vol in this case from historical data.
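One naive way to aggregate (an assumption on my part, not an established convention) is to average the tenor vols in variance terms over the option's remaining life, since variances, not vols, are additive; the vol numbers below are hypothetical:

```python
import math

# hypothetical annualized vols of the 3m, 6m, 9m and 1y forwards,
# each estimated from historical data for its own tenor
tenor_vols = [7.0, 8.0, 9.0, 10.0]

# equal-weight average in variance space across the residual tenors
avg_variance = sum(v ** 2 for v in tenor_vols) / len(tenor_vols)
residual_vol = math.sqrt(avg_variance)
```

A refinement would be to weight each tenor's variance by the fraction of the option's life spent exposed to it.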

by tyler_durden at October 31, 2014 05:14 AM


How can I see that OCaml is a functional language? [on hold]

Basically, a functional language means that everything is a function. The output of a function depends only on its input. So, how can I see that a programming language like OCaml is a functional language? Could I have some examples?
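The usual observable traits are that functions are first-class values and that results depend only on arguments. A sketch of those traits (in Python as a neutral analogue; OCaml expresses the same ideas natively, e.g. `let compose f g x = f (g x)` and `List.map`):

```python
# functions are values: they can be built, passed, and returned
def compose(f, g):
    return lambda x: f(g(x))

double = lambda x: 2 * x
inc = lambda x: x + 1

# pure: the result depends only on the argument, no hidden state
assert compose(double, inc)(3) == 8          # double(inc(3))
assert list(map(double, [1, 2, 3])) == [2, 4, 6]
```

In OCaml these idioms are the default style rather than an option, which is the practical sense in which it is "a functional language".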

Thanks a lot.

by dykw at October 31, 2014 05:13 AM


[Advice] Algorithms and Data Structures

Hello, /r/compsci

I am a third-year computer science student, and my course work is obviously very theory-based. I am taking my Intermediate/Advanced Algorithms course next semester, which most of my course work after that builds on. Sadly, the professor teaching the course is senile (the department does not have enough money to hire anyone else), so I probably won't learn very much. Anyway, I am looking for recommendations of books covering algorithms and data structures at an intermediate to advanced level. To give you an idea of where I am: I have read Algorithms, 4th Edition and understand the material pretty well. In closing, what would the great world of reddit recommend next?

TL;DR - I am looking to continue my knowledge in algorithms and data structures and I have read Algorithms, 4th Edition. What should I read next?

submitted by lilred181
[link] [6 comments]

October 31, 2014 04:56 AM


zmq: load balancing broker doesn't work when clients and workers are separated out?

from __future__ import print_function

import threading
import time
import zmq


def worker_thread(worker_url, context, i):
    """ Worker using REQ socket to do LRU routing """

    socket = context.socket(zmq.REQ)

    # Set the worker identity
    socket.identity = (u"Worker-%d" % (i)).encode('ascii')


    socket.connect(worker_url)

    # Tell the broker we are ready for work
    try:
        socket.send(b"READY")

        while True:

            address = socket.recv()
            empty = socket.recv()
            request = socket.recv()

            print("%s: %s\n" % (socket.identity.decode('ascii'), request.decode('ascii')), end='')

            socket.send(address, zmq.SNDMORE)
            socket.send(b"", zmq.SNDMORE)
            socket.send(b"OK")

    except zmq.ContextTerminated:
        # context terminated so quit silently
        return

def client_thread(client_url, context, i):
    """ Basic request-reply client using REQ socket """

    socket = context.socket(zmq.REQ)

    socket.identity = (u"Client-%d" % (i)).encode('ascii')


    socket.connect(client_url)

    #  Send request, get reply
    socket.send(b"HELLO")
    reply = socket.recv()

    print("%s: %s\n" % (socket.identity.decode('ascii'), reply.decode('ascii')), end='')

NBR_CLIENTS = 10
NBR_WORKERS = 3


def main():
    """ main method """

    url_worker = "inproc://workers"
    url_client = "inproc://clients"
    client_nbr = NBR_CLIENTS

    # Prepare our context and sockets
    context = zmq.Context()
    frontend = context.socket(zmq.ROUTER)
    frontend.bind(url_client)
    backend = context.socket(zmq.ROUTER)
    backend.bind(url_worker)

    # create workers and clients threads
    for i in range(NBR_WORKERS):
        thread = threading.Thread(target=worker_thread, args=(url_worker, context, i, ))
        thread.start()

    for i in range(NBR_CLIENTS):
        thread_c = threading.Thread(target=client_thread, args=(url_client, context, i, ))
        thread_c.start()

    # Logic of LRU loop
    # - Poll backend always, frontend only if 1+ worker ready
    # - If worker replies, queue worker as ready and forward reply
    # to client if necessary
    # - If client requests, pop next worker and send request to it

    # Queue of available workers
    available_workers = 0
    workers_list      = []

    # init poller
    poller = zmq.Poller()

    # Always poll for worker activity on backend
    poller.register(backend, zmq.POLLIN)

    # Poll front-end only if we have available workers
    poller.register(frontend, zmq.POLLIN)

    while True:

        socks = dict(poller.poll())

        # Handle worker activity on backend
        if (backend in socks and socks[backend] == zmq.POLLIN):

            # Queue worker address for LRU routing
            worker_addr  = backend.recv()

            assert available_workers < NBR_WORKERS

            # add worker back to the list of workers
            available_workers += 1
            workers_list.append(worker_addr)

            #   Second frame is empty
            empty = backend.recv()
            assert empty == b""

            # Third frame is READY or else a client reply address
            client_addr = backend.recv()

            # If client reply, send rest back to frontend
            if client_addr != b"READY":

                # Following frame is empty
                empty = backend.recv()
                assert empty == b""

                reply = backend.recv()

                frontend.send(client_addr, zmq.SNDMORE)
                frontend.send(b"", zmq.SNDMORE)
                frontend.send(reply)

                client_nbr -= 1

                if client_nbr == 0:
                    break  # Exit after N messages

        # poll on frontend only if workers are available
        if available_workers > 0:

            if (frontend in socks and socks[frontend] == zmq.POLLIN):
                # Now get next client request, route to LRU worker
                # Client request is [address][empty][request]
                client_addr = frontend.recv()

                empty = frontend.recv()
                assert empty == b""

                request = frontend.recv()

                #  Dequeue and drop the next worker address
                available_workers -= 1
                worker_id = workers_list.pop()

                backend.send(worker_id, zmq.SNDMORE)
                backend.send(b"", zmq.SNDMORE)
                backend.send(client_addr, zmq.SNDMORE)
                backend.send(b"", zmq.SNDMORE)
                backend.send(request)

    # Out of infinite loop: do some housekeeping
    frontend.close()
    backend.close()
    context.term()


if __name__ == "__main__":
    main()

The above works as is; running it prints:

    Worker-1: HELLO
    Worker-0: HELLO
    Worker-2: HELLO
    Client-0: OK
    Client-1: OK
    Worker-1: HELLO
    Worker-0: HELLO
    Client-2: OK
    Worker-2: HELLO
    Client-5: OK
    Client-4: OK
    Worker-1: HELLO
    Worker-2: HELLO
    Client-3: OK
    Client-6: OK
    Worker-0: HELLO
    Worker-1: HELLO
    Client-7: OK
    Client-8: OK
    Client-9: OK

I wanted to separate the logic out into three .py files:
one for the worker
one for the client
one for the broker:

def do_lb():

url_worker = "inproc://workers"
url_client = "inproc://clients"
client_nbr = NBR_CLIENTS

# Prepare our context and sockets
context = zmq.Context()
frontend = context.socket(zmq.ROUTER)
backend = context.socket(zmq.ROUTER)

# create workers and clients threads
#raw_input("Press Enter to continue...")
#print 'continuing...'

# Logic of LRU loop
# - Poll backend always, frontend only if 1+ worker ready
# - If worker replies, queue worker as ready and forward reply
# to client if necessary
# - If client requests, pop next worker and send request to it

# Queue of available workers
available_workers = 0
workers_list      = []

# init poller
poller = zmq.Poller()

# Always poll for worker activity on backend
poller.register(backend, zmq.POLLIN)

# Poll front-end only if we have available workers
poller.register(frontend, zmq.POLLIN)

while True:
    print 'in while true loop'
    socks = dict(poller.poll())

    print 'out1'
    # Handle worker activity on backend
    if (backend in socks and socks[backend] == zmq.POLLIN):
        print 'out11'
        # Queue worker address for LRU routing
        worker_addr  = backend.recv()
        assert available_workers < NBR_WORKERS

        # add worker back to the list of workers
        available_workers += 1
        workers_list.append(worker_addr)

        print 'out12'
        #   Second frame is empty
        empty = backend.recv()
        assert empty == b""

        print 'out13'
        # Third frame is READY or else a client reply address
        client_addr = backend.recv()

        # If client reply, send rest back to frontend
        if client_addr != b"READY":

            # Following frame is empty
            empty = backend.recv()
            assert empty == b""

            reply = backend.recv()

            frontend.send(client_addr, zmq.SNDMORE)
            frontend.send(b"", zmq.SNDMORE)
            frontend.send(reply)

            client_nbr -= 1

            if client_nbr == 0:
                break  # Exit after N messages

    print 'out2'
    # poll on frontend only if workers are available
    if available_workers > 0:
        print 'workers are available'
        if (frontend in socks and socks[frontend] == zmq.POLLIN):
            # Now get next client request, route to LRU worker
            # Client request is [address][empty][request]
            client_addr = frontend.recv()

            empty = frontend.recv()
            assert empty == b""

            request = frontend.recv()

            #  Dequeue and drop the next worker address
            available_workers -= 1
            worker_id = workers_list.pop()

            backend.send(worker_id, zmq.SNDMORE)
            backend.send(b"", zmq.SNDMORE)
            backend.send(client_addr, zmq.SNDMORE)
            backend.send(b"", zmq.SNDMORE)
            backend.send(request)
        print 'workers are not available'

# Out of infinite loop: do some housekeeping


import random
import sys
import threading
import time

import zmq

# using inproc
def worker_thread(context, i):
    """ Worker using REQ socket to do LRU routing """

    print 'creating worker %s' % i
    worker_url = "inproc://workers"
    socket = context.socket(zmq.REQ)
    # Set the worker identity
    socket.identity = (u"Worker-%d" % (i)).encode('ascii')

    socket.connect(worker_url)

    # Tell the broker we are ready for work
    try:
        socket.send(b"READY")

        while True:
            print 'in a while true loop, waiting...'
            address = socket.recv()
            empty = socket.recv()
            request = socket.recv()

            print("%s: %s\n" % (socket.identity.decode('ascii'), request.decode('ascii')))

            socket.send(address, zmq.SNDMORE)
            socket.send(b"", zmq.SNDMORE)
            socket.send(b"OK")

    except zmq.ContextTerminated:
        # context terminated so quit silently
        return


import random
import sys
import threading
import time

import zmq

#using inproc
def client_thread(context, i):
    """ Basic request-reply client using REQ socket """

    print 'creating client %s' % i
    client_url = "inproc://clients"
    socket = context.socket(zmq.REQ)
    socket.identity = (u"Client-%d" % (i)).encode('ascii')

    socket.connect(client_url)

    #  Send request, get reply
    print 'sending hello'
    socket.send(b"HELLO")
    reply = socket.recv()

    print("%s: %s\n" % (socket.identity.decode('ascii'), reply.decode('ascii')))

I started one first, then the others, and they just hang, doing nothing.

Anyone know why?

by ealeon at October 31, 2014 04:27 AM

How to deserialize Flume's Avro events coming to Spark?

I have a Flume Avro sink and a Spark Streaming program that reads the sink. CDH 5.1, Flume 1.5.0, Spark 1.0, using Scala as the programming language on Spark.

I was able to run the Spark example and count the Flume Avro events.

However, I was not able to deserialize the Flume Avro event into a string/text and then parse the structured row.

Does anyone have an example of how to do so using Scala?

by Yoni Darash at October 31, 2014 04:17 AM

Enter file in which to save the key

When generating an SSH key with OpenBSD, we are asked to enter a file in which to save the key.

ssh-keygen -t rsa -C ""
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/TheUser/.ssh/id_rsa):

From reading the OpenBSD manual pages, I understand that the file we enter will store the private key and another file with a .pub extension will store the public key.

Normally this program generates the key and asks for a file in which to store the private key. The public key is stored in a file with the same name but “.pub” appended.

The GitHub pages on Generating SSH Keys say that we should just press enter to continue here. My sense is that this means we'll just use a default file, which I assume is the one in the parentheses, for example: (/c/Users/TheUser/.ssh/id_rsa).

Is what I wrote above correct? Also, what are the implications of actually entering a file in which to save the key rather than just pressing enter as GitHub suggests? While I'm pretty sure that id_rsa is just the default, and that it can be anything, I would like to know the conventions.
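For what it's worth, the path in parentheses is indeed the default, and pressing enter accepts it. If you do enter a different file, the main implication is that ssh will not try the key automatically; you have to point to it with `ssh -i` or an `IdentityFile` line in ~/.ssh/config. A hedged example (the `demo_key` name is hypothetical):

```shell
# generate a key pair under a non-default name (-N "" = empty passphrase)
ssh-keygen -q -t rsa -N "" -C "demo" -f /tmp/demo_key
# private key and matching public key
ls /tmp/demo_key /tmp/demo_key.pub
```

You would then select it explicitly, e.g. `ssh -i /tmp/demo_key git@github.com`, or via a `Host` block in ~/.ssh/config.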

by Shaun Luttin at October 31, 2014 04:14 AM

Showing JavaDoc/ScalaDoc for Maven/Sbt managed projects in IntelliJ IDEA

I'm using IntelliJ IDEA 13.1 Ultimate. For Java/Scala projects not managed by Maven or Sbt, auto-popup documentation shows up fine. But for Maven/Sbt-managed projects, JavaDoc/ScalaDoc does not show up. The dialog box appears but there's nothing inside.

by Tongfei Chen at October 31, 2014 04:03 AM



Are imports and conditionals in Play's routes file possible?

I know that earlier versions of Play supported imports and conditionals (if blocks) in the routes file, but I cannot find any such documentation for Play 2.2.x, and the HTTP routing page says nothing about such a feature.

I want to replace this:

GET /api/users/:id com.corporate.project.controllers.UserController.get(id)

with a shorter version using import as follows:

import com.corporate.project.controllers._ 

GET /api/users/:id UserController.get(id)

Also, is it possible to have conditionals in the routes file? e.g.

if Play.isDev(Play.current())
  GET /displayConfig   DebugController.displayServerConfigs()

by wrick at October 31, 2014 03:53 AM

Twitter Finagle client: How to make external REST api call?

I'm trying to make an external (to the Finagle server) REST GET request in my Finagle code; the URI for it is:

I'm using the client code found in the example at:

My code (written in Scala) looks like the following, but it just hangs even though I set a timeout limit:

val client: Service[HttpRequest, HttpResponse] = ClientBuilder()
  .hosts("") // If >1 host, client does simple load-balancing

val req = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "")

val f = client(req) // Client, send the request

// Handle the response:
f onSuccess { res =>
  println("got response", res)
} onFailure { exc =>
  println("failed :-(", exc)

I suspect my hosts param is wrong? But what am I supposed to put in there if this is a call to an external REST service?

by pl0u at October 31, 2014 03:47 AM

Specify arity using only or except when importing function on Elixir

I'm studying Elixir, and when I use the only or except options while importing functions from a module, I need to specify an arity. Why?


import :math, only: [sqrt: 1]


import :math, except: [sin: 1, cos: 1]

by Guilherme Carlos at October 31, 2014 03:35 AM


Why don't CS students from the same school know the same number of languages?

At work, I know three from ucsd's cse program. Two have already graduated (a quarter apart), while one is an intern with 1-2 more quarters.

When I ask them what languages they know, they each answer different things. I'd assume they'd take the same core classes, so they should know the same languages.

Or am I misunderstanding cs classes?

submitted by basselksolis
[link] [5 comments]

October 31, 2014 03:25 AM


Fluentd can't upload buffer file to s3 when every hour on the hour

This is my fluentd log folder; this is my fluentd conf file.

Hi everyone: I use fluentd to collect my logs, and these logs are generated by my python program, so I use forward as the source parameter. But the file that is generated on every hour on the hour gets left on the machine where fluentd is running, and sometimes some files accumulate that can't be uploaded to s3. When I restart fluentd, these files, except the one generated on the hour, continue to be uploaded to s3. That's fine, but how can I deal with these un-uploaded files? I searched Google but can't find a good method. Who can help me? Thank you!

================= Update: If possible I want to deal with the problem of the accumulating files — why don't they upload to s3? I use AWS's EC2 and S3 in the same region and same VPC.

by magigo at October 31, 2014 03:25 AM

How would you get a command to run on startup as a non-root user, on BeagleBone Black?

I have, generally speaking, followed the instructions here, and tried using su - <myuser> -c "the command" within the service script there. However, I'm trying to run a Clojure application via Leiningen, and it seems that lein can't be found by the process. I can use something like su - <myuser> -c "/path/to/lein run ...", but then I get an error that java isn't found.

How do I get this command to run such that it has access to my environment?

by metasoarous at October 31, 2014 02:47 AM

get all keys of play.api.libs.json.JsValue

I have to store play.api.libs.json.JsValue keys in a List. How do I do this?

var str = ???                        //json String
val json: JsValue = Json.parse(str)
val data=json.\("data")
println(data)                       //[{"3":"4"},{"5":"2"},{"4":"5"},{"2":"3"}]
val newIndexes=List[Long]()
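For comparison, here is the same extraction sketched in Python (an analogue of the intended Scala, where each element of the `data` array is a one-entry object and we collect every key as a number; in Play JSON the equivalent would be reading the array as a list of JsObject and collecting their `.keys`):

```python
import json

s = '{"data": [{"3": "4"}, {"5": "2"}, {"4": "5"}, {"2": "3"}]}'
data = json.loads(s)["data"]

# flatten the one-entry objects into a list of their (numeric) keys
new_indexes = [int(k) for obj in data for k in obj]
assert new_indexes == [3, 5, 4, 2]
```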



by Govind Singh Nagarkoti at October 31, 2014 02:44 AM


Linear congruential generator with uniform distribution

I am currently studying linear congruential generators, and there was an example in which I didn't get the code:

public class Random {

    static final int a = 48271;
    static final int p = 2147483647; // 2^31 - 1
    static final int q = p / a;
    static final int r = p % a;
    int state;

    public Random() {
        this((int) (System.currentTimeMillis() % Integer.MAX_VALUE));
    }

    public Random(int initialValue) {
        if (initialValue < 0)
            initialValue += p;
        state = initialValue;
        if (state == 0)
            state = 1;
    }

    public int randomInt() {
        int tmp = a * (state % q) - r * (state / q);  // line I don't get
        if (tmp < 0)
            state = tmp + p;
        else
            state = tmp;
        return state;
    }
}
The teacher said that it was so that all numbers can be expressed in 32 bits (or something like that). The thing is that an LCG should have a period of exactly $p-1$, and I don't get why that should be the case with this line.
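That line is Schrage's decomposition of (a*state) % p: it rewrites the modular product so that no intermediate value leaves the 32-bit signed range, which is exactly why the "simpler" version fails in Java — a*state overflows int there. In Python, where ints never overflow, we can check that the two forms agree (a sketch):

```python
# Schrage: with q = p // a and r = p % a (and r < q, which holds for
# a = 48271, p = 2^31 - 1), a*(s % q) - r*(s // q) stays within (-p, p)
a, p = 48271, 2**31 - 1
q, r = p // a, p % a

def schrage(state):
    tmp = a * (state % q) - r * (state // q)
    return tmp + p if tmp < 0 else tmp

# compare against the direct form, which Python can compute exactly
for state in (1, 12345, p - 1):
    assert schrage(state) == (a * state) % p
```

Since both forms compute the same function of the state, the decomposition has no effect on the period; it only prevents overflow from corrupting the result.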

Could you also explain why the following code would not work?

public int randomInt() {
    int tmp = (a*state)%p;
    if (tmp <= 0) state = tmp+p;
    else state = tmp;
    return state;
}

Thanks in advance for any answer.

Edit: I compared both generators by generating numbers in the interval $]0,1[$ with the following codes:

    public int randomInt() {
        int tmp = (a*state)%p;
        if (tmp <= 0) state = tmp+p;
        else state = tmp;
        return state;
    }

    public int randomInt2() {
        int tmp = a * (state % q) - r * (state/q);
        if (tmp < 0) state = tmp+p;
        else state = tmp;
        return state;
    }

    public double randomReal() {
        return randomInt()/(double)p;
    }

    public double randomReal2() {
        return randomInt2()/(double)p;
    }

And the Main method:


public class Main {

public static void main(String[] args) {

    File file = new File("resultatsGC.xls");
    FileWriter fw = null;
    Random rand = new Random();
    Random rand2 = new Random();
    double x, y;
    int[] tab = {0,0,0,0,0,0,0,0,0,0};
    int[] taby = {0,0,0,0,0,0,0,0,0,0};

    try {

         for (int i = 1 ; i < Math.pow(2, 31)-1 ; i++) {
            x = rand.randomReal();
            y = rand2.randomReal2();
            for (int j = 0 ; j < 10 ; j++) {
                if (x > j/(double)10 && x < (j+1)/(double)10) {
                if (y > j/(double)10 && y < (j+1)/(double)10) {
            if (i%100000000 == 0)
                System.out.println(i+") "+x);

        fw = new FileWriter(file, false);

        //Write in file the results
        for (int i = 0 ; i < 10 ; i++) {
            double a = ((double)i)/10;
            double b = ((double)(i+1))/10;
            String str = "["+a+";"+b+"]\t";


    } catch (FileNotFoundException e) {
    } catch (IOException e) {
    } finally {
        try {
            if (fw != null)
        } catch (IOException e) {

And here is what it graphically gives: Results, view 1

Results, view 2

Orange columns are the results of the algorithm that I had a question about. Clearly, it is better (much more uniform), but my question remains: why is it correct (and even more correct than the other)?

by Laurent Hayez at October 31, 2014 02:41 AM


Add two lists as single digits on scheme

I'm trying to create a function that takes in two lists of single digits and adds them.

e.g. x = (4 2 0 1), y= (4 2 0 1). x+y = (8 4 0 2).

so far I have:

(define list+
  (lambda (d1 d2) 
      (map + d1 d2)))

This works perfectly for the above example, but I'm unsure how to deal with situations such as:

x = (5 2 0 1), y= (5 2 0 1). x+y = (0 5 0 2). Using the above code gives (10 4 0 2).
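Assuming the lists store the least significant digit first (which both examples match: (5 2 0 1) is 1025, and 1025 + 1025 = 2050 is (0 5 0 2)), the missing piece is a carry threaded through the element-wise sums. A Python sketch of the intended behaviour:

```python
def list_add(d1, d2):
    # digits are least-significant first, so carries propagate
    # naturally as we walk the lists left to right
    out, carry = [], 0
    for a, b in zip(d1, d2):
        carry, digit = divmod(a + b + carry, 10)
        out.append(digit)
    if carry:
        out.append(carry)  # a final carry adds one more digit
    return out

assert list_add([4, 2, 0, 1], [4, 2, 0, 1]) == [8, 4, 0, 2]
assert list_add([5, 2, 0, 1], [5, 2, 0, 1]) == [0, 5, 0, 2]
```

In Scheme the same idea would replace the plain `map +` with a recursion (or fold) that passes the carry along.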

by Ben at October 31, 2014 02:38 AM




Stochastic processes- question about probability of continuous variables

I'm having a hard time understanding something conceptually. I understand that the probability of getting any specific value from a continuum is 0. However, my professor says that since there are infinitely many of these zeros within the continuum, they eventually add up to 1 via integration, but I really don't see how exactly they add up to one. He keeps comparing it to finding the area under a curve via integration, but I don't quite see the relation between them, since I just don't understand what it means to add up a bunch of zeros. It's been a while since I took calculus, so maybe the problem is that I don't fully understand integration?
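One way to phrase the resolution: integration does not sum the probabilities of individual points, it sums density times width. For a density $f$,

```latex
P(a \le X \le b) = \int_a^b f(x)\,dx \;\approx\; \sum_i f(x_i)\,\Delta x
```

Each term $f(x_i)\,\Delta x$ shrinks to zero as $\Delta x \to 0$, but the number of terms grows in proportion, so the Riemann sums converge to the area under $f$ — that is exactly the area-under-a-curve picture. Over the whole support that area is 1, while any single point contributes $\int_a^a f(x)\,dx = 0$, which is the sense in which "zeros add up to one".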

by FrostyStraw at October 31, 2014 01:56 AM


24 year old special operations vet starting degree in Computer Science this spring. I lack the depth of knowledge base to correctly articulate my desired end state so how could I ever correctly and accurately focus my energy. I need your help.

I just ETSd from the Army. Spent 5 years and some change chewing rocks and dragging my knuckles through a couple trips downrange. I will be formally introduced to Computer Science January 11th. As I was looking at the course catalog, I was overwhelmed by my inability to translate a class name into its real-world application or value. I have a very small knowledge base and have been working diligently to prepare by reading everything I can get my hands on and working through Learn Python the Hard Way. I understand that computer science is ambiguous and often misunderstood and is really more about math than code. I am not looking for any advice on learning or where to start.

Through a few deployments I had the opportunity to get a small glimpse of the unreal amount of raw intelligence that was turned into actionable intelligence. Through some time spent on Google, this realization and amazement led me to dive into what kind of technical infrastructure it would take to execute this, and I have been hooked ever since, drinking from the firehose it seems like most of the time. I am on my second time going through Learn Python the Hard Way, I jump from language to language on Codecademy, and I review math concepts with new effort to "break down problems in an analytical way."

The problem I have come to is that I know my desired end state, or at least I know that I want to concentrate my education towards encryption, information security, privacy, data mining and analysis. I mean, my dream job would be having some effect in battling back the aggressive government intrusion revealed through the Snowden files.

So, understanding that all this information is open source and that you just have to learn: I know what I would tell a new private if he asked me how to become a barrel-chested freedom fighter and be able to grow a beard. I would tell him to do it the hard way and work hard as hell, rinse and repeat. But honestly, I also know that there is a much better way of explaining and describing the reality that would better move his career towards special operations or whatever the end goal may be. Crawl, walk, run — I got it. 5m target before 25m target. I get it, and I will continue to aggressively seek knowledge and learn what I can without direction, but I would appreciate it if someone could give me a better-articulated explanation of what exactly I am interested in and what applications and skill sets best set me up, or could direct my energy in a general direction.

I apologize in advance if this is the inappropriate location for this or if this is just a stupid and naive question, but whatever the case, I'm going to keep drinking from the firehose!

submitted by realArtVandaley
[link] [10 comments]

October 31, 2014 01:54 AM


Clojure - parse string to date

I'm trying to parse an unformatted string which contains a date (e.g. today = "08082013") into the format "08.08.2013".

This works: (.parse (java.text.SimpleDateFormat. "ddMMyyyy") today) => <Date Sun Jan 01 00:00:00 CET 1950>

But when I do (.parse (java.text.SimpleDateFormat. "dd.MM.yyyy") today) I get the error "Unparseable date: "08082013"

Why? How can I get my desired date format?
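The pattern passed to parse must match the raw input; a second format is then used for output — one formatter can't do both steps at once. The same two-step flow in Python (as an analogue of parsing with one SimpleDateFormat and formatting with another):

```python
from datetime import datetime

s = "08082013"
d = datetime.strptime(s, "%d%m%Y")    # parse with the pattern matching the raw string
formatted = d.strftime("%d.%m.%Y")    # then format with the desired output pattern
assert formatted == "08.08.2013"
```

In Clojure/Java terms, the analogous fix would be to .parse with the "ddMMyyyy" formatter and then .format the resulting Date with a separate "dd.MM.yyyy" formatter.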

by m-arv at October 31, 2014 01:43 AM



arXiv Logic in Computer Science

Cut-elimination and the decidability of reachability in alternating pushdown systems. (arXiv:1410.8470v1 [cs.LO])

We give a new proof of the decidability of reachability in alternating pushdown systems, showing that it is a simple consequence of a cut-elimination theorem for some natural-deduction style inference systems. Then, we show how this result can be used to extend an alternating pushdown system into a complete system where for every configuration $A$, either $A$ or $\neg A$ is provable.

by Gilles Dowek, Ying Jiang at October 31, 2014 01:30 AM

Optimal Deployment of Geographically Distributed Workflow Engines on the Cloud. (arXiv:1410.8359v1 [cs.DC])

When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model that solves the workflow deployment problem, which is generated using an automated constraint modelling system. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces the workflow execution time and provides a speed up of 1.3x-2.5x over centralised approaches.

by Long Thai, Adam Barker, Blesson Varghese, Ozgur Akgun, Ian Miguel at October 31, 2014 01:30 AM

Executing Bag of Distributed Tasks on the Cloud: Investigating the Trade-offs Between Performance and Cost. (arXiv:1410.8357v1 [cs.DC])

Bag of Distributed Tasks (BoDT) can benefit from decentralised execution on the Cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of Cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper is motivated towards investigating this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm, which considers the user's preference of performance and cost is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.

by Long Thai, Blesson Varghese, Adam Barker at October 31, 2014 01:30 AM

Insights from the Nature for Cybersecurity. (arXiv:1410.8317v1 [cs.CR])

Nature has over 3.8 billion years of experience in developing solutions to the challenges facing organisms living in extremely diverse conditions. We argue that current and future cybersecurity solutions should take advantage of nature, hence be designed, developed and deployed in a way that covers as many features from the proposed, nature-based PROTECTION framework.

by Elzbieta Rzeszutko, Wojciech Mazurczyk at October 31, 2014 01:30 AM

Cost Preserving Bisimulations for Probabilistic Automata. (arXiv:1410.8314v1 [cs.FL])

Probabilistic automata constitute a versatile and elegant model for concurrent probabilistic systems. They are equipped with a compositional theory supporting abstraction, enabled by weak probabilistic bisimulation serving as the reference notion for summarising the effect of abstraction.

This paper considers probabilistic automata augmented with costs. It extends the notions of weak transitions in probabilistic automata in such a way that the costs incurred along a weak transition are captured. This gives rise to cost-preserving and cost-bounding variations of weak probabilistic bisimilarity, for which we establish compositionality properties with respect to parallel composition. Furthermore, polynomial-time decision algorithms are proposed, that can be effectively used to compute reward-bounding abstractions of Markov decision processes in a compositional manner.

by Andrea Turrini, Holger Hermanns at October 31, 2014 01:30 AM

System description: Isabelle/jEdit in 2014. (arXiv:1410.8222v1 [cs.LO])

This is an updated system description for Isabelle/jEdit, according to the official release Isabelle2014 (August 2014). The following new PIDE concepts are explained: asynchronous print functions and document overlays, syntactic and semantic completion, editor navigation, management of auxiliary files within the document-model.

by Makarius Wenzel (Univ. Paris-Sud, Laboratoire LRI, UMR8623) at October 31, 2014 01:30 AM

PIDE for Asynchronous Interaction with Coq. (arXiv:1410.8221v1 [cs.HC])

This paper describes the initial progress towards integrating the Coq proof assistant with the PIDE architecture initially developed for Isabelle. The architecture is aimed at asynchronous, parallel interaction with proof assistants, and is tied in heavily with a plugin that allows the jEdit editor to work with Isabelle.

We have made some generalizations to the PIDE architecture to accommodate for more provers than just Isabelle, and adapted Coq to understand the core protocol: this delivered a working system in about two man-months.

by Carst Tankink (Inria Saclay - Île-de-France) at October 31, 2014 01:30 AM

The Certification Problem Format. (arXiv:1410.8220v1 [cs.LO])

We provide an overview of CPF, the certification problem format, and explain some design decisions. Whereas CPF was originally invented to combine three different formats for termination proofs into a single one, in the meanwhile proofs for several other properties of term rewrite systems are also expressible: like confluence, complexity, and completion. As a consequence, the format is already supported by several tools and certifiers. Its acceptance is also demonstrated in international competitions: the certified tracks of both the termination and the confluence competition utilized CPF as exchange format between automated tools and trusted certifiers.

by Christian Sternagel (University of Innsbruck, Austria), René Thiemann (University of Innsbruck, Austria) at October 31, 2014 01:30 AM

A Logic-Independent IDE. (arXiv:1410.8219v1 [cs.LO])

The author's MMT system provides a framework for defining and implementing logical systems. By combining MMT with the jEdit text editor, we obtain a logic-independent IDE. The IDE functionality includes advanced features such as context-sensitive auto-completion, search, and change management.

by Florian Rabe (Jacobs University Bremen) at October 31, 2014 01:30 AM

Advanced Proof Viewing in ProofTool. (arXiv:1410.8218v1 [cs.LO])

Sequent calculus is widely used for formalizing proofs. However, due to the proliferation of data, understanding the proofs of even simple mathematical arguments soon becomes impossible. Graphical user interfaces help in this matter, but since they normally utilize Gentzen's original notation, some of the problems persist. In this paper, we introduce a number of criteria for proof visualization which we have found out to be crucial for analyzing proofs. We then evaluate recent developments in tree visualization with regard to these criteria and propose the Sunburst Tree layout as a complement to the traditional tree structure. This layout constructs inferences as concentric circle arcs around the root inference, allowing the user to focus on the proof's structural content. Finally, we describe its integration into ProofTool and explain how it interacts with the Gentzen layout.

by Tomer Libal (Microsoft Research - Inria Joint Center, Ecole Polytechnique), Martin Riener (Institute of Computer Languages, Vienna University of Technology), Mikheil Rukhaia (Institute of Applied Mathematics, Tbilisi State University) at October 31, 2014 01:30 AM

Tinker, tailor, solver, proof. (arXiv:1410.8217v1 [cs.LO])

We introduce Tinker, a tool for designing and evaluating proof strategies based on proof-strategy graphs, a formalism previously introduced by the authors. We represent proof strategies as open-graphs, which are directed graphs with additional input/output edges. Tactics appear as nodes in a graph, and can be `piped' together by adding edges between them. Goals are added to the input edges of such a graph, and flow through the graph as the strategy is evaluated. Properties of the edges ensure that only the right `type' of goals are accepted. In this paper, we detail the Tinker tool and show how it can be integrated with two different theorem provers: Isabelle and ProofPower.

by Gudmund Grov (Heriot-Watt University), Aleks Kissinger (University of Oxford), Yuhui Lin (Heriot-Watt University) at October 31, 2014 01:30 AM

UTP2: Higher-Order Equational Reasoning by Pointing. (arXiv:1410.8216v1 [cs.LO])

We describe a prototype theorem prover, UTP2, developed to match the style of hand-written proof work in the Unifying Theories of Programming semantical framework. This is based on alphabetised predicates in a 2nd-order logic, with a strong emphasis on equational reasoning. We present here an overview of the user-interface of this prover, which was developed from the outset using a point-and-click approach. We contrast this with the command-line paradigm that continues to dominate the mainstream theorem provers, and raises the question: can we have the best of both worlds?

by Andrew Butterfield (Trinity College Dublin) at October 31, 2014 01:30 AM

How to Put Usability into Focus: Using Focus Groups to Evaluate the Usability of Interactive Theorem Provers. (arXiv:1410.8215v1 [cs.LO])

In recent years the effectiveness of interactive theorem provers has increased to an extent that the bottleneck in the interactive process shifted to efficiency: while in principle large and complex theorems are provable (effectiveness), it takes a lot of effort for the user interacting with the system (lack of efficiency). We conducted focus groups to evaluate the usability of Isabelle/HOL and the KeY system with two goals: (a) detect usability issues in the interaction between interactive theorem provers and their user, and (b) analyze how evaluation and survey methods commonly used in the area of human-computer interaction, such as focus groups and co-operative evaluation, are applicable to the specific field of interactive theorem proving (ITP).

In this paper, we report on our experience using the evaluation method focus groups and how we adapted this method to ITP. We describe our results and conclusions mainly on the "meta-level," i.e., we focus on the impact that specific characteristics of ITPs have on the setup and the results of focus groups. On the concrete level, we briefly summarise insights into the usability of the ITPs used in our case study.

by Bernhard Beckert (Karlsruhe Institute of Technology (KIT)), Sarah Grebing (Karlsruhe Institute of Technology (KIT)), Florian Böhl (Karlsruhe Institute of Technology (KIT)) at October 31, 2014 01:30 AM

Proportional-Integral Clock Synchronization in Wireless Sensor Networks. (arXiv:1410.8176v1 [cs.DC])

In this article, we present a new control theoretic distributed time synchronization algorithm, named PISync, in order to synchronize sensor nodes in Wireless Sensor Networks (WSNs). PISync algorithm is based on a Proportional-Integral (PI) controller. It applies a proportional feedback (P) and an integral feedback (I) on the local measured synchronization errors to compensate the differences between the clock offsets and clock speeds. We present practical flooding-based and fully distributed protocol implementations of the PISync algorithm, and we provide theoretical analysis to highlight the benefits of this approach in terms of improved steady state error and scalability as compared to existing synchronization algorithms. We show through real-world experiments and simulations that PISync protocols have several advantages over existing protocols in the WSN literature, namely no need for memory allocation, minimal CPU overhead and code size independent of network size and topology, and graceful performance degradation in terms of network size.

by <a href="">Kas&#x131;m Sinan Y&#x131;ld&#x131;r&#x131;m</a>, <a href="">Ruggero Carli</a>, <a href="">Luca Schenato</a> at October 31, 2014 01:30 AM

On finding orientations with fewest number of vertices with small out-degree. (arXiv:1410.8154v1 [cs.DM])

Given an undirected graph, each of the two end-vertices of an edge can own the edge. Call a vertex poor, if it owns at most one edge. We give a polynomial time algorithm for the problem of finding an assignment of owners to the edges which minimizes the number of poor vertices. In the terminology of graph orientation, this means finding an orientation for the edges of a graph minimizing the number of vertices with out-degree at most 1, and answers a question of Asahiro, Jansson, Miyano, and Ono (2014).

by Kaveh Khoshkhah at October 31, 2014 01:30 AM


Clojure / ClojureScript Crossovers and cljx

I'm trying to figure out the relationship between ClojureScript crossovers and the cljx pre-processor.

Are they designed to be used together? Or rival solutions to the same problem?

Is one becoming the preferred or more standard way to do things?

In particular what I want to do is to create a single library that can be compiled as Clojure and ClojureScript (with a couple of variations). I'm currently using cljx for this.

But then I want to include the library in further clj and cljx projects. Looking for information about this, I'm largely coming across documentation for crossovers but not cljx.

by interstar at October 31, 2014 01:27 AM




UML Help?

I have to make a UML diagram for my Java class project. I was wondering if anyone could help me with UML, as I have never used it before, and googling for help hasn't worked so far.

submitted by Thrar

October 31, 2014 01:06 AM


Brazil wants to lay a cable to Portugal to evade the ...

Brazil wants to lay a cable to Portugal to evade the NSA. The cable is to be laid without any involvement from US companies.

Of course, that won't work either. The Americans were already tapping undersea cables almost 40 years ago. So this is only a political gesture, but a nice one.

October 31, 2014 01:01 AM

The "5-star prison" in the Tegel correctional facility ...

The "5-star prison" in the Tegel correctional facility turns out not to be so five-star after all. Specifically, the heating wasn't finished in time. Hey, who needs heating at an outside temperature of 5 degrees!

October 31, 2014 01:01 AM

Portland Pattern Repository

Planet Theory

Betweenness Centrality in Dense Random Geometric Networks

Authors: Alexander P. Giles, Orestis Georgiou, Carl P. Dettmann
Download: PDF
Abstract: Random geometric networks are mathematical structures consisting of a set of nodes placed randomly within a bounded set $\mathcal{V}\subseteq\mathbb{R}^{d}$ mutually coupled with a probability dependent on their Euclidean separation, and are the classic model used within the expanding field of ad-hoc wireless networks. In order to rank the importance of network nodes, we consider the well established `betweenness' centrality measure (quantifying how often a node is on a shortest path of links between any pair of nodes), providing an analytic treatment of betweenness within a random graph model using a continuum approach by deriving a closed form expression for the expected betweenness of a node placed within a dense random geometric network formed inside a disk of radius $R$. We confirm this with numerical simulations, and discuss the importance of the formula for mitigating the `boundary effect' connectivity phenomenon, for cluster head node election protocol design and for detecting the location of a network's `vulnerability backbone'.

October 31, 2014 12:41 AM

A 13k-kernel for Planar Feedback Vertex Set via Region Decomposition

Authors: Marthe Bonamy, Lukasz Kowalik
Download: PDF
Abstract: We show a kernel of at most $13k$ vertices for the Planar Feedback Vertex Set problem restricted to planar graphs, i.e., a polynomial-time algorithm that transforms an input instance $(G,k)$ to an equivalent instance with at most $13k$ vertices. To this end we introduce a few new reduction rules. However, our main contribution is an application of the region decomposition technique in the analysis of the kernel size. We show that our analysis is tight, up to a constant additive term.

October 31, 2014 12:41 AM

Binary Determinantal Complexity

Authors: Jesko Hüttenhain, Christian Ikenmeyer
Download: PDF
Abstract: We prove that for writing the 3 by 3 permanent polynomial as a determinant of a matrix consisting only of zeros, ones, and variables as entries, a 7 by 7 matrix is required. Our proof is computer based and uses the enumeration of bipartite graphs. Furthermore, we analyze sequences of polynomials that are determinants of polynomially sized matrices consisting only of zeros, ones, and variables. We show that these are exactly the sequences in the complexity class of constant free polynomially sized (weakly) skew circuits.

October 31, 2014 12:41 AM

Learning circuits with few negations

Authors: Eric Blais, Clément L. Canonne, Igor C. Oliveira, Rocco A. Servedio, Li-Yang Tan
Download: PDF
Abstract: Monotone Boolean functions, and the monotone Boolean circuits that compute them, have been intensively studied in complexity theory. In this paper we study the structure of Boolean functions in terms of the minimum number of negations in any circuit computing them, a complexity measure that interpolates between monotone functions and the class of all functions. We study this generalization of monotonicity from the vantage point of learning theory, giving near-matching upper and lower bounds on the uniform-distribution learnability of circuits in terms of the number of negations they contain. Our upper bounds are based on a new structural characterization of negation-limited circuits that extends a classical result of A. A. Markov. Our lower bounds, which employ Fourier-analytic tools from hardness amplification, give new results even for circuits with no negations (i.e. monotone functions).

October 31, 2014 12:40 AM

AC-Feasibility on Tree Networks is NP-Hard

Authors: Karsten Lehmann, Alban Grastien, Pascal Van Hentenryck
Download: PDF
Abstract: Recent years have witnessed significant interest in convex relaxations of the power flows, several papers showing that the second-order cone relaxation is tight for tree networks under various conditions on loads or voltages. This paper shows that AC-feasibility, i.e., to find whether some generator dispatch can satisfy a given demand, is NP-Hard for tree networks.

October 31, 2014 12:40 AM





Planet Theory

Drawing Partially Embedded and Simultaneously Planar Graphs

Authors: Timothy M. Chan, Fabrizio Frati, Carsten Gutwenger, Anna Lubiw, Petra Mutzel, Marcus Schaefer
Download: PDF
Abstract: We investigate the problem of constructing planar drawings with few bends for two related problems, the partially embedded graph problem---to extend a straight-line planar drawing of a subgraph to a planar drawing of the whole graph---and the simultaneous planarity problem---to find planar drawings of two graphs that coincide on shared vertices and edges. In both cases we show that if the required planar drawings exist, then there are planar drawings with a linear number of bends per edge and, in the case of simultaneous planarity, a constant number of crossings between every pair of edges. Our proofs provide efficient algorithms if the combinatorial embedding of the drawing is given. Our result on partially embedded graph drawing generalizes a classic result by Pach and Wenger which shows that any planar graph can be drawn with a linear number of bends per edge if the location of each vertex is fixed.

October 31, 2014 12:00 AM

October 30, 2014


Master method question [duplicate]


Given the following equation: T(n)=5∗T(n/3)+4n

Am I right that a = 5, b = 3, d = 4 which would fulfill case 2 of the Master Method (a < b^d), and gives O(n^d) running time, or am I missing something? The choices aren't so straightforward, but none seem to match.
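For reference, a worked application of the standard form $T(n) = a\,T(n/b) + \Theta(n^d)$ (note that $4n = \Theta(n^1)$, so $d$ is the exponent of $n$, not the constant factor):

```latex
\[
T(n) = 5\,T(n/3) + 4n
\quad\Rightarrow\quad
a = 5,\; b = 3,\; d = 1,
\qquad
a > b^{d} \;(5 > 3)
\;\Rightarrow\;
T(n) = \Theta\!\left(n^{\log_3 5}\right) \approx \Theta\!\left(n^{1.46}\right).
\]
```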

by kiss-o-matic at October 30, 2014 11:21 PM


Call 2 functions in ML

I am new to ML and I am trying to make a list of Fibonacci results. I know that after you write a semicolon the function will return; how can I call 2 functions in one else clause? I looked into let and end. Suggestions would be appreciated.

   fun fib a =
    if a <= 2 then 1
    else fib(a - 1) + fib(a - 2);

    fun fib_help lst =
     if null(lst)
      then 0
       fib(hd(lst)); (*How can I call*)
       fib_help(tl(lst)); (*both of these functions? *)


I get the following error output

val fib = fn : int -> int
val fib_help = fn : int list -> int
Error: unbound variable or constructor: lst


by user3312266 at October 30, 2014 11:15 PM


How to solve the following recurrence: g(n) = g(log n) + n^(1/2) [duplicate]


Doesn't fit the Master method, and I am not sure where to go from here. Thanks
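One standard approach, sketched here under the assumption that $g$ is $O(1)$ below some fixed base case: unroll the recurrence and bound the tail.

```latex
\[
g(n) = n^{1/2} + (\log n)^{1/2} + (\log\log n)^{1/2} + \cdots
\]
```

There are roughly $\log^* n$ terms, and every term after the first is at most $(\log n)^{1/2}$, so the tail is $O\!\big((\log n)^{1/2}\log^* n\big) = o(n^{1/2})$, giving $g(n) = \Theta(n^{1/2})$.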

by Yechiel Labunskiy at October 30, 2014 11:09 PM


Akka Named Resource Serial Execution

I'm looking for suggestions on how to accomplish the following. My Akka application, which will be running as a cluster, will be persisting to a backend web service. Each resource I'm persisting is named. For example: A, B, C

There will be a queue of changes for each resource, and I'm looking for an idea on how I can have a configuration which allows me to control the following:

  • Maximum number of REST calls in progress at any point in time (overall concurrency)
  • Ensure that only one REST request for each named resource is in progress
  • It's fine for concurrent requests, as long as they are not for the same resource
  • The named resources are dynamic, based on records in a database


by Innominate at October 30, 2014 11:04 PM




Scala Akka TCP Actors

I have a question about the Akka 2.4 TCP API.

I'm running a server and have 2 TCP servers in Akka TCP: one for incoming clients and one for my server's worker nodes (which are on other computers/IPs). I have one current connection to a client and one connection to a worker node.

When receiving a message from a client, I want to pass some of that information on to the worker node, but the TCP Akka actor representing the worker-node connection doesn't seem to react when I send it messages from the thread running the client actor.

So, as an example, if the client sends a message to delete a file, and that partitions on that file is on a worker node, I want to send a TCP message to that worker node that it should delete the partitions.

How can I from the client Actor send a message to the worker node Actor, that it should pass to the worker node server through TCP? When just doing the regular workerActorRef ! msg it doesn't receive it at all and no logging is shown.

I hope this question isn't unclear; essentially I want the workerActorRef to offer functionality along the lines of "send this through the TCP socket".



by Johan S at October 30, 2014 10:57 PM


English Student Halloween Comics, and Other Spooky Specials

Amazing note @Kichiru yesterday:


There’s a second one too! Adorable.

I love this. I’m so pleased to have my work enter the pantheon of English-student-written comics (a genre ably represented by Dinosaur Comics and Nedroid, among others).

The comic Kichiru shared with his students was a collaboration with KC Green — I wrote it, he drew it! It’s called “Emmy & the Eggs” and we made it as a Halloween special a few years ago.

If you missed it, it’s in three parts here, here, and here.


BONUS LINK: Speaking of spooky specials, the Tweet Me Harder Halloween episode is still online for your podcast entertainment!

Kris and Mikey just released a Halloween episode of their Chainsawsuit podcast, as well.

Do you have a favorite Halloween special that folks should read or watch or listen to? Post it in the comments!

by David Malki at October 30, 2014 10:50 PM

DragonFly BSD Digest

BSDNow 061: IPSECondwind

As you may be able to guess, BSDNow episode 061 has an interview with John-Mark Gurney about updating FreeBSD’s IPSEC setup, along with the normal collection of news items.  There’s also a link to a new BSD-switching blog, and “mailing list gold”.

by Justin Sherrill at October 30, 2014 10:39 PM


Does the Y combinator contradict the Curry-Howard correspondence?

The Y combinator has the type $(a \rightarrow a) \rightarrow a$. By the Curry-Howard Correspondence, because the type $(a \rightarrow a) \rightarrow a$ is inhabited, it must correspond to a true theorem. However $a \rightarrow a$ is always true, so it appears as if the Y combinator's type corresponds to the theorem $a$, which is not always true. How can this be?
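For context, the standard resolution, sketched informally: Curry-Howard yields a consistent logic only for calculi in which all terms normalize. A calculus extended with Y (i.e. general recursion) corresponds to an inconsistent logic in which every type is inhabited, not to a logic that proves one particular false theorem:

```latex
\[
\mathsf{Y} : (a \to a) \to a,
\qquad
\lambda x.\,x \;:\; a \to a
\;\;\Longrightarrow\;\;
\mathsf{Y}\,(\lambda x.\,x) \;:\; a \quad \text{for every } a,
\]
```

so every proposition becomes "provable", which is why the typed calculi used for Curry-Howard (e.g. the simply typed $\lambda$-calculus, where $\mathsf{Y}$ is not even typeable) exclude it.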

by Joshua at October 30, 2014 10:38 PM



Understanding the weak-OWF exists -> OWF exists proof

This is a proof that I've gone back to many times over the last few years and while I can read it and easily verify the steps, it seems like it's a proof, where I will always essentially forget the details, i.e. if I read it today, I would struggle to write down a full proof tomorrow without actually spending quite a bit of effort. I'm talking about the proof that's e.g. in Goldreich's Foundations of Crypto book, which I believe is standard (I've never seen a different proof).

As complexity theory is not my field, I would hope that people who are more experienced in the field could make sense of the proof by answering the following questions:

  1. What part of the proof are completely standard techniques?

  2. What part of the proof if any is a "trick".

The basic idea is easy: Given a weak one-way function, repeat it many times, so that any inverter of the repeated function needs to invert all the pieces. Dealing with the lack of independence in processing the components in the inversion is then the tricky part and I'm hoping that there's a way to split it up into standard arguments. At least understanding it in useful pieces that might be applicable elsewhere would be nice, if possible.

by JT1 at October 30, 2014 10:28 PM

DragonFly BSD Digest

dports for DragonFly 4.0

Despite my complete lack of good planning, John Marino and Francois Tigeot have packages available for the DragonFly 4.0 release candidate that I assembled.  Point at this directory to use them.

by Justin Sherrill at October 30, 2014 10:25 PM


Formal proof or counter-example? Formal languages problem [on hold]

I just started a course called 'Automata and Formal Languages'. I'm having difficulty proving or disproving this equality.

$ (L_{1} \circ L_{2})^{+} = L_{1}^{+} \circ L_{2}^{+} $


$ L_{1} $, $L_{2}$ are Languages.

$\circ$ is the concatenation operation between two languages.

$+$ is the Kleene plus closure defined by $\bigcup _{i = 1}^{\infty }L^{i} $

I tried finding a counter example and also tried a formal proof, but had no luck. Can someone please point me in the correct direction?
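For what it's worth, a small sketch with two singleton languages suggests the equality fails in general:

```latex
\[
L_1 = \{a\},\quad L_2 = \{b\}:
\qquad
abab \in (L_1 \circ L_2)^{+},
\qquad
L_1^{+} \circ L_2^{+} = \{\, a^{m} b^{n} : m, n \ge 1 \,\} \not\ni abab,
\]
```

so the left-hand side contains a word the right-hand side cannot.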


by IsraelElmekies at October 30, 2014 10:18 PM


Keybindings not taking effect -- Best practices

I've been using Emacs for around a month now and still have much to learn. I'm mainly developing in C++, and have used the Emacs configuration from Tuhdo (which is a great tutorial for getting started) and have ultimately decided to use redguardtoo's emacs file.

I'm using EVIL and have set the variable so that EVIL uses C-u to scroll.

I'm trying to rebind universal-argument to C-c C-u by running the following command:

(global-set-key (kbd "C-c C-u") 'universal-argument) 

When running this command in C++ major mode, describe-key says that this key is bound in c++-mode-map, to a command from "cc-cmds.el".

My understanding is that the order of priority (highest to lowest) for keymaps is essentially: local/minor, major, global, so it makes sense that the global key is not in effect.

I've done some research and am not sure of the best options to approach this. I'd really appreciate suggestions so I can learn how to do this correctly. It would be nice if you could describe caveats or pros/cons to each of the suggestions

  • Create your own minor mode
  • Define-key c++-mode-map for this keybinding. However, my concern is that loading order matters. How can I ensure that my binding runs after cc-cmds.el (note, when I click on cc-cmds.el in describe-key, it can't even find the file!)

Questions (summary):

  • What's the best approach to handle this?
  • If major mode map is the solution, how can I guarantee that my define-key is run after the problem binding?
  • Is there a program or way to determine if a keybinding gets overwritten? redguardtoo's emacs file has a lot of bindings, and it'd be nice to get a message every time a keybinding gets assigned multiple times so that I can clean it up
submitted by cheezy64

October 30, 2014 10:09 PM


What college comp-sci lab room station design layout works best?

Anybody have ideas on how to structure the lab room layout to optimize the comp-sci/programming learning experience? We have a square room roughly 40' on a side with a ceiling mount projector and a requirement for 20 stations (including tables). Is it better to have the stations all face front, or in rows, or in 4/pod-unit or in some other orientation? We like to have the students work together on some assignments.

submitted by ClubSoda

October 30, 2014 10:03 PM


Default Constructor of Class

I have a class in Scala like this:

abstract class ErrorMessages(val message: String, val publicMessage: String, val objectData: Option[Object], val backtrace: Exception)

which I use a lot as a base for subclasses, for example:

case class MyError(override val message: String, override val publicMessage: String, override val objectData: Option[Object], override val backtrace: Exception) extends ErrorMessages(message, publicMessage, objectData, backtrace)

My question here is: can I write that last part more concisely, e.g. as case class MyError(...) extends ErrorMessages(...) or similar, to cut down the code needed to create a new instance of my error?

by FIG-GHD742 at October 30, 2014 09:59 PM


Arbitrage-free market for continuous logreturn distribution?

Is it true, that a one-period market say $(0,t)$ is arbitrage-free if the logreturn for $S_t$ is continuously distributed on $\mathbb{R}$?

I.e., for continuous distributions on $\mathbb{R}$, there always exists a martingale measure?

E.g. for the multinomial model, the market is arbitrage-free if $r_1<r_f<r_m$; for a continuous distribution on $\mathbb{R}$ we would have $-\infty<r_f<\infty$ (which is always true).

by emcor at October 30, 2014 09:40 PM


OpenGL(lwjgl) incorrect orientation normals

I am using lwjgl in scala to try to properly light a chess pawn and some spheres.

Based on this picture:


I think half of the normals of the faces are flipped. But I do not know how to flip the incorrect half.

The shader currently being used is a gouraud shader (see gouraud.fragment and gouraud.vertex). I think the problem occurs either in the shader or in the sphere class (extension of drawable).

What is weird is that the shading is perfect when I run the program on my friend's laptop, who has an AMD GPU.

I am using an NVIDIA 780 Ti on Ubuntu 14.04.

I have been trying to solve this for over 4 hours, so any help is immensely appreciated.

Gouraud Vertex Shader (the fragment shader is omitted):

#version 430

layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec3 vertexNormal;
layout (location = 2) in vec2 vertexTexturePosition; //unused

out vec3 position;
out vec3 vertexColor;

uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;

uniform vec3 light;

uniform vec4 ambientProduct, diffuseProduct, specularProduct;
uniform float shininess;

void main(){
    position = (model*vec4(vertexPosition,1.0)).xyz;
    gl_Position = projection*view*model*vec4(vertexPosition, 1.0);

    vec3 normal = mat3(view*model)*vertexNormal;

    vec4 ambient, diffuse, specular;
    vec3 N = normalize(normal);
    vec3 L = normalize(light - position);
    vec3 E = -normalize(position);
    vec3 H = normalize(L+E);
    float Kd = max(dot(L, N), 0.0);
    float Ks = pow(max(dot(N, H), 0.0), shininess);
    ambient = ambientProduct;
    diffuse = Kd*diffuseProduct;
    specular = max(Ks*specularProduct, 0.0);
    vertexColor = (ambient + diffuse + specular).xyz;
}

Drawable Abstract Class:

package demo

import org.lwjgl.BufferUtils
import org.lwjgl.opengl.GL15._
import org.lwjgl.opengl.GL11._
import org.lwjgl.opengl.GL20._
import org.lwjgl.opengl.GL30._
import scala.collection.mutable.ArrayBuffer
import scala.collection.mutable.Buffer
import org.lwjgl.opengl.GL11
import org.lwjgl.opengl.GL15
import org.lwjgl.util.vector.Vector3f 

abstract class Drawable { 

  val vao = glGenVertexArrays()
  val vbo = initVBO()
  val ebo = initEBO()
  glVertexAttribPointer(0, 3, GL_FLOAT, true, 8*4, 0)

  protected def initVBO() : Int
  protected def initEBO() : Int

  def draw()

  val modelMatrix = new Matrix()

  def draw(translate : Vector3f = new Vector3f(0,0,0), scale : Vector3f = new Vector3f(1,1,1), rotation : Vector3f = new Vector3f(0,0,0), angle : Float = 0, material : Material ,shader : AbstractShader){

    glUniformMatrix4(shader.getLocation("model"), false, modelMatrix.toBuffer)

Sphere Class:

package demo

import org.lwjgl.opengl.GL15._
import org.lwjgl.opengl.GL11._
import org.lwjgl.opengl.GL20._
import org.lwjgl.opengl.GL30._
import scala.collection.mutable.ArrayBuffer
import org.lwjgl.BufferUtils
import org.lwjgl.util.vector.Vector3f
import collection.mutable

class Sphere(size : Int = 1)(detail : Int = 1) extends Drawable{

  case class Triangle(a : Vector3f, b : Vector3f, c : Vector3f){

    def subdivide(amount : Int) : Seq[Triangle] = {
      val x = ( a.getX() + b.getX() + c.getX())/3
      val y = ( a.getY() + b.getY() + c.getY())/3
      val z = ( a.getZ() + b.getZ() + c.getZ())/3
      val mid = new Vector3f(x,y,z)
      val t1 = Triangle(a,b,mid)
      val t2 = Triangle(b,c,mid)
      val t3 = Triangle(a,c,mid)
      if(amount == 0){


  lazy val triangles = {
    val a = new Vector3f(0,0,1)
    val b = new Vector3f(0f, 0.942809f, -0.333333f)
    val c = new Vector3f(-0.816497f, -0.471405f, -0.333333f)
    val d = new Vector3f(0.816497f, -0.471405f, -0.333333f)
    val t1 = Triangle(a,b,c)
    val t2 = Triangle(b,c,d)
    val t3 = Triangle(c,d,a)
    val t4 = Triangle(d,a,b)

  override protected def initVBO() = {
    val vbo = glGenBuffers()
    val buffer = BufferUtils.createFloatBuffer(triangles.size*3*8);
    for(t <- triangles){
      buffer put Array[Float](t.a.getX()*size,t.a.getY()*size,t.a.getZ()*size,t.a.getX(),t.a.getY(),t.a.getZ(),0,0)
      buffer put Array[Float](t.b.getX()*size,t.b.getY()*size,t.b.getZ()*size,t.b.getX(),t.b.getY(),t.b.getZ(),0,0)
      buffer put Array[Float](t.c.getX()*size,t.c.getY()*size,t.c.getZ()*size,t.c.getX(),t.c.getY(),t.c.getZ(),0,0)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, buffer, GL_STATIC_DRAW)

  override protected def initEBO() = {
    val ebo = glGenBuffers()
    val buffer = BufferUtils.createIntBuffer(triangles.size*3);
    for(n <- 0 until triangles.size*3){
      buffer put n 
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo)

  override def draw() = {
    glDrawElements(GL_TRIANGLES, triangles.size*3, GL_UNSIGNED_INT,0)


Complete Code:

by Anton at October 30, 2014 09:37 PM

Scala IDE Template Editor Broke

I just downloaded the Scala IDE 4.0 release candidate 1 on my Windows machine. I set up a basic Play Scala project and tried opening the index.scala.html file with the New Play Editor; the file doesn't render correctly and looks like this: (screenshot)

So then I tried opening the file in the regular Play editor, and when I type, the characters are typed in reverse:

(screenshot)

Anybody know how to go about fixing this?

by j will at October 30, 2014 09:30 PM

Use of Scala by-name parameters

I am going through the book "Functional Programming in Scala" and have run across an example that I don't fully understand.

In the chapter on strictness/laziness the authors describe the construction of Streams and have code like this:

sealed trait Stream[+A]
case object Empty extends Stream[Nothing]
case class Cons[+A](h: () => A, t: () => Stream[A]) extends Stream[A]

object Stream {
    def cons[A](hd: => A, tl: => Stream[A]): Stream[A] = {
        lazy val head = hd
        lazy val tail = tl
        Cons(() => head, () => tail)
    }
}

The question I have is in the smart constructor (cons) where it calls the constructor for the Cons case class. The specific syntax being used to pass the head and tail vals doesn't make sense to me. Why not just call the constructor like this:

Cons(head, tail)

As I understand it, the syntax used forces the creation of two Function0 objects that simply return the head and tail vals. How is that different from just passing head and tail (without the () => prefix), since the Cons case class is already defined to take these parameters by-name anyway? Isn't this redundant? Or have I missed something?
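One way to see the difference concretely (a sketch of my own, not the book's code): a case class parameter cannot be by-name, so `Cons` must store explicit `() => A` thunks, and the smart constructor's `lazy val` guarantees the by-name argument is evaluated at most once no matter how often the thunk is forced:

```scala
var evals = 0
def expensive: Int = { evals += 1; 42 } // side effect counts evaluations

// Mirrors the smart constructor: cache the by-name argument in a lazy val,
// then hand out a thunk that forces the cache.
def memoThunk(hd: => Int): () => Int = {
  lazy val head = hd
  () => head
}

val t = memoThunk(expensive)
assert(evals == 0) // nothing evaluated yet
t(); t()
assert(evals == 1) // evaluated once, cached thereafter
```

Passing the cached value directly (`Cons(head, tail)`) would not even compile, because `Cons` expects `() => A` values rather than `A`s; the `() =>` wrappers are what convert the lazy vals into the thunks the case class stores.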

by melston at October 30, 2014 09:26 PM


Data structure that allows moving groups of elements into buckets

I'm looking for a data structure that can do the following geometric operation:

Suppose there are a set of buckets $b_0, b_1..., b_n$ each of which contains some elements. Suppose I want to move all the elements in buckets $b_i$ where $i > k$ one bucket forward. So the elements in bucket $b_{k+1}$ would be moved to bucket $b_k$ and the elements in bucket $b_{k+2}$ would be moved to bucket $b_{k+1}$ and so on. The obvious way to move these elements is to go to each bucket and shift all the elements into the previous bucket. But is there a data structure that will allow me to move the buckets all at once?

I used buckets in the description because it's easier to formulate the question in terms of buckets. But the data structure doesn't necessarily have to be buckets. All I need is a data structure that allows me to move "chunks" of elements in one go (for example, shifting a run of consecutive buckets forward by $x$ positions all at once) and then allows me to query the structure as normal (for example, querying $b_{k}$ with all the elements shifted).
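For what it's worth, here is a sketch of one indirection-based approach (my own illustration, assuming the shift described is equivalent to deleting bucket $b_k$): keep each bucket's elements behind a reference, and store only the references in an indexable sequence. The shift then moves bucket handles, never elements:

```scala
import scala.collection.mutable

// Three buckets; the elements live inside the inner buffers.
val buckets = mutable.ArrayBuffer(
  mutable.ArrayBuffer(1, 2),
  mutable.ArrayBuffer(3),
  mutable.ArrayBuffer(4, 5))

// "Shift every bucket after k=1 one forward" = drop the k-th handle.
buckets.remove(1)

assert(buckets.length == 2)
assert(buckets(1) == mutable.ArrayBuffer(4, 5)) // old b2 is now b1
```

With a plain array of handles the removal is $O(n)$ in the number of buckets (though independent of the element count); storing the handles in a balanced tree or finger tree brings the shift and the query to $O(\log n)$.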

EDIT: I'm mainly interested in whether an already existing data structure that I'm not aware of does this.

by q2liu at October 30, 2014 09:22 PM


Quick method for approximate integer square roots

I'm looking for an algorithm that -- given a positive integer $n$ -- outputs a positive integer $\bar{n}$ with the following two properties:

  1. $(\bar{n}+1)^2>n$;
  2. $(\bar{n}-1)^2<n$;

So we have $\bar{n}=\lfloor\sqrt{n}\rfloor$ or $\bar{n}=\lceil\sqrt{n}\rceil$ for every $n$. The point is that I don't care which it is, and it needn't be consistently the same one. As long as properties (1) and (2) are always true, it doesn't matter.

I'd like to find a fast algorithm, especially if there exists one faster than simply doing Newton's method a fixed number of times. As an example, here's an algorithm that always satisfies (1), but unfortunately fails (2) [starting at $n=480$].

  1. Take the first half of the binary representation of $n$ (call it $m$).
  2. Compute $\bar{n}=(m + n/m) / 2$. (That's integer division.)

Here's an example with $n=7$:

  1. $n=111_2$, so $m=11_2=3$.
  2. $\bar{n}=(3+7/3)/2=(3+2)/2=2$.
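For comparison, a common integer Newton iteration (not from the post) converges to $\lfloor\sqrt{n}\rfloor$ exactly, so it satisfies both (1) and (2) for every positive $n$, including the failure case $n=480$:

```scala
// Integer Newton's method: the iterate starts above the root and
// decreases monotonically until it reaches floor(sqrt(n)).
def isqrt(n: Long): Long = {
  require(n >= 0)
  if (n < 2) n
  else {
    var x = n
    var y = (x + 1) / 2
    while (y < x) {
      x = y
      y = (x + n / x) / 2
    }
    x
  }
}

assert(isqrt(7) == 2)
assert(isqrt(480) == 21) // 21*21 = 441 < 480 < 484 = 22*22
assert(isqrt(484) == 22)
```

Seeding the iteration with the half-length binary prefix $m$ instead of $n$ cuts the iteration count further, since Newton's method roughly doubles the number of correct digits per step.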

by Steve D at October 30, 2014 09:22 PM

Why having a simple multiplication loop and very good avalanche isn't enough to produce well-distributed hash values?

Modern non-cryptographic 32- and 64-bit hash functions, for example lookup3, MurmurHash3 and CityHash, have quite sophisticated loops, each iteration of which includes many multiplications, XORs and rotates. Why is this needed, given that there are good avalanche procedures (32 -> 32 and 64 -> 64 bits) which mix the bits effectively randomly? So, even if this simple loop:

hash = seed;
while (input_len >= 8)
    hash += fetch_8_input_bytes() * prime;
... do something for rest 0..7 bytes of input

produces flawed, biased results for similar inputs (or a certain subset of inputs), a finalizing avalanche:

return good_avalanche(hash)

"fixes" this.

The remaining concern is direct collisions, but they are often not so important, assuming the hash function produces reasonably few collisions. I think the above multiplication loop does.

What am I missing?

by leventov at October 30, 2014 09:19 PM



Does “Second X is NP-complete” imply “X is NP-complete”?

"Second $X$" problem is the problem of deciding the existence of another solution different from some given solution for problem instance.

For some $NP$-complete problems, the second-solution version is $NP$-complete (deciding the existence of another solution for the partial Latin square completion problem), while for others it is either trivial (Second NAE SAT) or cannot be $NP$-complete (Second Hamiltonian cycle in cubic graphs) under widely believed complexity conjectures. I am interested in the opposite direction.

We assume a natural $NP$ problem $X$ where there is natural efficient verifier that verifies a natural interesting relation $(x, c)$ where $x$ is an input instance and $c$ is a short witness of membership of $x$ in $X$. All witnesses are indistinguishable to the verifier. The validity of witnesses must be decided by running the natural verifier and it does not have any knowledge of any correct witness ( both examples in the comments are solutions by definition).

Does “Second $X$ is NP-complete” imply “$X$ is NP-complete” for all "natural" problems $X$?

In other words, are there any "natural" problems $X$ where this implication fails? Or equivalently,

Is there any "natural" problem $X$ in $NP$ and not known to be $NP$-complete but its Second $X$ problem is $NP$-complete?

EDIT: Thanks to Marzio's comments, I am not interested in contrived counter-examples. I am only interested in natural and interesting counter-examples for NP-complete problems $X$ similar to the ones above. An acceptable answer is either a proof of the above implication or a counter-example "Second X problem" which is defined for natural, interesting, and well known $NP$ problem $X$.

EDIT 2: Thanks to the fruitful discussion with David Richerby, I have edited the question to emphasize that my interest is only in natural problems $X$.

EDIT 3: Motivation: First, the existence of such an implication may simplify the $NP$-completeness proofs of many $NP$ problems. Second, it links the complexity of deciding the uniqueness of a solution to that of deciding the existence of a solution for $NP$ problems.

by Mohammad Al-Turkistany at October 30, 2014 09:03 PM



Gradle tasks do not set Standard Input in IntelliJ

I have a project that uses the Leap Motion SDK which I managed to get running on both Windows and Linux. I'm having trouble reading input when running from IntelliJ, however.

My main sets up a controller and listener which dumps data to the console in another thread; the main thread then blocks until a line is read, and exits.

def main(args: Array[String]) {
  val listener = new SampleListener
  val controller = new Controller


  var input = readLine()

  while (true) Thread sleep (1000)
}


In IntelliJ it seems the standard input stream is not set, so the readLine returns immediately and the program exits. I've had to put in an infinite loop of sleeps to keep it running whilst in the IDE. Further, stopping the Gradle task from IntelliJ doesn't actually kill the Java app.

I read that adding the standardInput line following my buildscript will make sure the standard input is not null, but it still doesn't work in IntelliJ:

run {
    main = ''
    standardInput = System.in
    jvmArgs = [ "-Djava.library.path=lib" ]
}

If you know how to solve either of these problems please let me know, thank you.

by perryperry at October 30, 2014 08:30 PM

Clojure Functional reactive programming (FRP) with Lamina: Simple clock code?

I'm using Lamina to implement Functional reactive programming (FRP).

As a starter, I try to code a very simple clock in order to understand the library basics.

According to the Lamina 0.5.0-rc4 API documentation, there is a lamina.time API:

I want to implement a very simple clock where:

  • Interval of every second as Observable time Streaming Collection/List/Seq (I don't quite understand the difference yet) (EDIT: now I understood it's called Channels on Lamina)

  • Now as Observable Streaming data

  • Println Now on every second (subscribe or for-each Observable time Collection)

Any feedback is welcome. Thanks.

EDIT: I quit.

After some research, I conclude the best way to code FRP is ClojureScript with RxJs(ReactiveExtention from MS).

See the example code for ClojureScript + RxJs + node.js in my related question here: ClojureScript on node.js, code

by Ken OKABE at October 30, 2014 08:19 PM

Extract only first match with Regex in Scala

I'm trying to extract a specific string value from a JSON-formatted string in Scala. However, this is going to be used in a production environment, so I'm concerned about efficiency. Currently, I'm using the bit of code below:

val r = """identifier=\{S: ([\w\.]+),""".r
var identifier: String = "";
r.findAllIn(queryResult toString).matchData foreach {
  m => identifier = m.group(1)
}

My concern is efficiency. I don't need to validate the JSON itself (that's being produced by AWS, so I'm assuming it's good, and even if it's not, I can't change it), so there's no good reason to go through all the overhead of parsing it out.

That said, can I do this more efficiently with a regex, or would I have to go down to the level of finding the first occurrence of 'identifier={S: ', then the next occurrence of ',' after that, and getting the substring between the two? I was trying to do something with r.findFirstIn but I can't figure out a way to extract the group I want from that.

Or is there some other super efficient thing I'm not aware of that I could be doing?
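A sketch of the `findFirstMatchIn` route (the sample input below is invented): it returns an `Option[Regex.Match]`, and `group(1)` pulls the captured identifier without scanning past the first hit or mutating a `var`:

```scala
val r = """identifier=\{S: ([\w\.]+),""".r

// Hypothetical query result string, just for illustration.
val queryResult = """{identifier={S: com.example.item42,} ...}"""

val identifier: Option[String] =
  r.findFirstMatchIn(queryResult).map(_.group(1))

assert(identifier == Some("com.example.item42"))
```

Unlike `findAllIn(...).matchData foreach`, this stops at the first match, which is usually what you want when the value occurs once.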

by soong at October 30, 2014 08:15 PM

Using Xeon Phi with JVM-based language

Is it possible to use Xeon Phi using JVM-based language such as Scala? Is there any example?

by Kokizzu at October 30, 2014 08:12 PM



Finite number of Turing machines running concurrently on multi-tapes Turing-machine-equivalent?

So basically, there are several (a finite number of) Turing machines able to read from and write to the same set of tapes (the number of tapes is finite, but each tape may have infinitely many cells). We then consider this group of machines as a single machine for solving a problem. Would this machine be Turing-machine-equivalent?

by Lamos at October 30, 2014 08:04 PM


Write a lazy sequence of lines to a file in ClojureClr without reflection

I have a large lazy seq of lines that I want to write to a file. In C#, I would use System.IO.File/WriteAllLines which has an overload where the lines are either string[] or IEnumerable<string>.

I want to do this without using reflection at runtime.

(set! *warn-on-reflection* true)

(defn spit-lines [^String filename seq]
  (System.IO.File/WriteAllLines filename seq))

But, I get this reflection warning.

Reflection warning, ... - call to WriteAllLines can't be resolved.

In general I need to know when reflection is necessary for performance reasons, but I don't care about this particular method call. I'm willing to write a bit more code to make the warning go away, but not willing to force all the data into memory as an array. Any suggestions?

by agent-j at October 30, 2014 07:58 PM




Specification for a Functional Reactive Programming language

I am looking at messing around with creating a functional reactive framework at some point. I have read quite a lot about it and seen a few examples but I wanted to get a clear idea of what this framework would HAVE to do to be considered an FRP extension/dsl. I'm not really concerned with implementation problems or specifics etc but more as to what would be desired in a perfect world situation.

What would be the key operations and qualities of an ideal functional reactive programming language?

by seadowg at October 30, 2014 07:21 PM


Confusion in 2012 paper by Austrin and Håstad regarding hardness of approximating GLST

The paper in question is "On the Usefulness of Predicates", Per Austrin, Johan Håstad (arXiv:1204.5662 [cs.CC]).

On page 13, Example 8.2 they define a predicate $P$ which is $GLST$ with an additional accepting predicate of all $1$'s. The claim is that this predicate can be shown approximation resistant with Theorem 8.3, which requires that $P$ accept all strings $x_1 x_2 x_3 x_4$ such that $\prod_1 ^3 x_i = -1$ and $x_3 = -x_4$.

In particular, $P$ should accept $(1,1,-1,1)$ but the definition of $GLST$ provided requires that $x_2 \ne x_4$.

by Mark at October 30, 2014 07:14 PM


factorial function for Church numerals

I'm trying to implement the factorial lambda expression as described in the book Lambda-calculus, Combinators and Functional Programming

The way it's described there is :

fact = (Y)λf.λn.(((is-zero)n)one)((multiply)n)(f)(predecessor)n
Y = λy.(λx.(y)(x)x)λx.(y)(x)x


(x)y is equivalent to (x y) and
(x)(y)z is equivalent to (x (y z)) and
λx.x is equivalent to (fn [x] x)

and is-zero, one, multiply and predecessor are defined for the standard church numerals. Actual definitions here.

I translated that to the following

(defn Y-mine [y]        ;  (defn Y-rosetta [y]              
  ((fn [x] (y (x x)))   ;    ((fn [f] (f f))                
    (fn [x]             ;     (fn [f]                       
      (y                ;       (y (fn [& args]             
        (x x)))))       ;            (apply (f f) args))))))


(def fac-mine                                ; (def fac-rosetta
  (fn [f]                                    ;      (fn [f]
    (fn [n]                                  ;        (fn [n]
      (is-zero n                             ;          (if (zero? n)
        one                                  ;            1
        (multiply n (f (predecessor n))))))) ;            (* n (f (dec n)))))))

The commented out versions are the equivalent fac and Y functions from Rosetta code.

Question 1:

I understand from reading up elsewhere that Y-rosetta β-reduces to Y-mine. If that's the case, why is it preferable to use one over the other?

Question 2:

Even if I use Y-rosetta. I get a StackOverflowError when I try

((Y-rosetta fac-mine) two)


((Y-rosetta fac-rosetta) 2)

works fine.

Where is the unguarded recursion happening?

I suspect it's something to do with how the if form works in clojure that's not completely equivalent to my is-zero implementation. But I haven't been able to find the error myself.



Taking into consideration @amalloy's answer, I changed fac-mine slightly to take lazy arguments. I'm not very familiar with clojure so, this is probably not the right way to do it. But, basically, I made is-zero take anonymous zero argument functions and evaluate whatever it returns.

(def lazy-one (fn [] one))
(defn lazy-next-term [n f]
  (fn []
    (multiply n (f (predecessor n)))))

(def fac-mine                       
  (fn [f]                           
    (fn [n]                         
      ((is-zero n                   
        (lazy-next-term n f))))))

I now get an error saying:

=> ((Y-rosetta fac-mine) two)
ArityException Wrong number of args (1) passed to: core$lazy-next-term$fn  clojure.lang.AFn.throwArity (

Which seems really strange, considering that lazy-next-term is always called with n and f.

by rjsvaljean at October 30, 2014 07:12 PM



Explanation of Summations for Algorithm Analysis

I do not have a background in Computer Science, work as a Software Engineer, and am attending college for my Master's degree in Computer Science. I have a data structures and algorithms course that I am taking currently with the "Introduction to Algorithms" text book by Cormen, Leiserson, Rivest, and Stein (CLRS). This is not one of my homework problems, but rather extra effort for me to try and understand algorithm analysis as it relates to using summations.

For instance, consider the example of $T(n) = T(n-2) + n^2$. An answer I saw floating around had $\sum\limits_{i=0}^{n/2} (n-2i)^2$. I can understand how to get the $(n-2i)^2$ part, but am not sure how to get the upper and lower boundary conditions, as well as what to do after this point. I guess the total answer for this is $\Theta(n^3)$, but am not making the connection between the summation and the final answer.

I have had calculus in the past, and do remember some of the series chapter that dealt with harmonic, Taylor, geometric, telescoping, and power series. But as it relates to CS, I'm not quite sure where it is going.

So, my questions are:

  1. Why are the lower and upper bounds 0 and n/2, respectively?
  2. What do I do with the summation notation to get a final answer?

I'm sure this is easily answered and that I'm just overlooking something. I know that a series that looks like the harmonic series, which is $\sum_{i=1}^n (1/i)$, will give $\ln(n) + O(1)$. But most of it I don't understand how to get the values associated with Big-O, Omega, or Theta.
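For the example above, the pieces fit together like this: unrolling $T(n)=T(n-2)+n^2$ subtracts $2$ from the argument each step, so the recursion bottoms out after roughly $n/2$ steps (hence the bounds $i=0$ to $n/2$), and the term added at step $i$ is $(n-2i)^2$. The sum is then bounded above by

$$\sum_{i=0}^{n/2}(n-2i)^2 \;\le\; \sum_{j=0}^{n} j^2 \;=\; \frac{n(n+1)(2n+1)}{6} \;=\; O(n^3),$$

while the first $n/4$ terms are each at least $(n/2)^2$, giving $\sum_{i=0}^{n/2}(n-2i)^2 \ge \frac{n}{4}\cdot\frac{n^2}{4} = \Omega(n^3)$; together, $T(n)=\Theta(n^3)$.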

I appreciate the help, guidance, and any examples/tutorials you can point me to.

by Kinetic Arc at October 30, 2014 07:02 PM


Converting Future[SomeObject] into json

I'm using scala and writing my domain objects to json. I use Play's Json Combinators like this:

implicit def opensHighlights: Writes[Option[OpensHighlights]] =
    (__ \ 'header).write[String] and
    (__ \ 'topDeviceForOpens).write[String] and
    (__ \ 'percentage).write[String] and
    (__ \ 'percentageOf).write[String])(opensMaybe => {
      val header = Messages("email.summary.highlights.device.opens")
      val percentageOf = Messages("email.summary.highlights.ofAll.opens")
      opensMaybe match {
        case Some(opens) => (
          Percentage(opens.opensOnThisDevice, opens.totalOpens).stringValue(),
        case None => (header, NotApplicable, "0.00", percentageOf)

I'm using this writer in a larger writer:

implicit def summaryHighlightsWrites: Writes[SummaryHighlights] = {
        (__ \ "google").write[Either[GoogleError, GoogleHighlights]] and
        (__ \ "dateWithHighestClickToOpenRate").write[Option[DateHighlights]] and
        (__ \ "subjectLine").write[Option[SubjectLineHighlights]] and
        (__ \ "location").write[Option[LocationHighlights]] and
        (__ \ "link").write[Option[LinkHighlights]] and
        (__ \ "deviceForOpens").write[Option[OpensHighlights]] and
        (__ \ "deviceForClicks").write[Option[ClicksHighlights]])(summary => {
          val result = for {
            google <- summary.google
            dateRange <- summary.dateRange
            subjectLine <- summary.subjectLine
            location <- summary.location
            link <- summary.link
            opensDevice <- summary.opensDevice
            clicksDevice <- summary.clicksDevice
          } yield (google, dateRange, subjectLine, location, link, opensDevice, clicksDevice)

          Await.result(result, 10 seconds)

And here is the SummaryHighlights class:

case class SummaryHighlights(
  google: Future[Either[GoogleError, GoogleHighlights]],
  dateRange: Future[Option[DateHighlights]],
  subjectLine: Future[Option[SubjectLineHighlights]],
  location: Future[Option[LocationHighlights]],
  link: Future[Option[LinkHighlights]],
  opensDevice: Future[Option[OpensHighlights]],
  clicksDevice: Future[Option[ClicksHighlights]])

I need these fields to each be a Future because they have independent sources and can fail/succeed independently.

I want to remove that explicit await. I want to move the await on the future from summaryHighlightsWrites to some other piece of code that calls this writer. Like a Play controller.
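A sketch of the usual refactoring (simplified, invented stand-in types, not the post's classes): combine the independent `Future`s into a single `Future` of the combined value with a for-comprehension, and let the caller, e.g. a Play controller using `Action.async`, decide whether to block at all:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stand-in for SummaryHighlights with just two independent sources.
case class Highlights(google: Int, link: String)

val googleF: Future[Int]  = Future(42)
val linkF: Future[String] = Future("clicked")

// Each future still fails or succeeds independently; zipping them
// defers the decision about blocking to the call site.
val combined: Future[Highlights] = for {
  g <- googleF
  l <- linkF
} yield Highlights(g, l)

// Only a test harness (or a legacy synchronous edge) should ever block:
assert(Await.result(combined, 5.seconds) == Highlights(42, "clicked"))
```

In the post's setting, the `Writes` would then operate on a plain, already-resolved value, and the controller would `map` over the combined future to produce the JSON response.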

Any help? Thanks

by sebi at October 30, 2014 07:00 PM



Worst case complexity of this summation?

$$\sum_{x=0}^{⌊n/5⌋} x$$

n can be any natural number

by user140243 at October 30, 2014 06:53 PM


equivalent of newtype deriving in ScalaZ

Is there an equivalent of Haskell's newtype ... deriving feature (as described in this video lecture, around the 36th minute) in ScalaZ?


by jhegedus at October 30, 2014 06:53 PM



Quantum Artificial Intelligence

Would anyone be able to point me in the direction of some good scholarly articles on Quantum Artificial Intelligence? I have to write a research paper on the topic and I'm having trouble coming up with possible subtopics.

by ray smith at October 30, 2014 06:22 PM


Is there a way to get Helm to work with environment variables?

Specifically when I am opening a file. Every once in a while I will have to open a file that is "far" away but close to an EV.

Thanks in advance!

submitted by excitedaboutemacs
[link] [6 comments]

October 30, 2014 06:19 PM


Are there any open source implementations of ARIMA forecasting in either Java or Scala?


by Vikram Garg at October 30, 2014 06:05 PM





How can I make a function that execute another function at most N times in Clojure?

First of All, I have a Mysql table like this:

create table t (id int(11) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY, name varchar(20), age int(10));

I define a function that will create a row in t:

(require '[honeysql.core :as sql])

(defn do-something []
    (sql/query {:insert-into  :t
                :values [{:name "name1" :age 10}]})
    (> 3 (rand-int 5)))

And now I want to run this function until it returns true, but at most N times.

This take-times code is wrong because repeat will evaluate (do-something) once and then build the lazy sequence from that single result.

(defn take-times []
   (some true? (repeat 5 (do-something))))

This take-times2 will evaluate do-something 5 times no matter what do-something returns.

(defn take-times2 []
    (some true? (for [i (range 5)]
                  (do-something))))

What should I do if I don't want to use an explicitly recursive function or a macro?

by savior at October 30, 2014 05:59 PM


Given two sequences of the same size, how many longest common subsequences can they have?

For simplicity assume that both have the same size N.

The length of this subsequence can be at most N, so maybe it's

max(C(N,1), C(N,2), ... , C(N,N))?

by jsguy at October 30, 2014 05:53 PM

JMF Framework Video Tutorial [on hold]

I need a fully complete video tutorial for JMF.

I came across some but they are not comprehensive.

by someone at October 30, 2014 05:44 PM


Running tests on Intellij: Class not found

I'm evaluating IntelliJ (13.0.2 133.696) and cannot get jUnit tests to run from within the IDE.

My project is a multi module gradle project and uses scala.

Test class is located under src/test/scala/xxx/xxxxx/xxx/xxxx/xxxxx and every time I try to run from the IDE I get the same error:

Class not found: ""

Test class is nothing fancy, simple jUnit test:

@ContextConfiguration(classes = Array(classOf[DataConfig], classOf[SettingsConfig]))
class AccountRepositoryTest extends AssertionsForJUnit {

I've found a related question Cannot run Junit tests from IDEA 13.0 IDE for imported gradle projects , but the provided fix (upgrade to 13.0.2) does not work.

I've even tried upgrading to the latest EAP, still the same issue.

by gerasalus at October 30, 2014 05:40 PM



Apache Spark Throws java.lang.IllegalStateException: unread block data

What we are doing is:

  1. Installing Spark 0.9.1 according to the documentation on the website, along with CDH4 (and another cluster with CDH5) distros of hadoop/hdfs.
  2. Building a fat jar with a Spark app with sbt then trying to run it on the cluster

I've also included code snippets, and sbt deps at the bottom.

When I've Googled this, there seems to be two somewhat vague responses: a) Mismatching spark versions on nodes/user code b) Need to add more jars to the SparkConf

Now I know that (b) is not the problem, having successfully run the same code on other clusters while only including one jar (it's a fat jar).

But I have no idea how to check for (a) - it appears Spark doesn't have any version checks or anything - it would be nice if it checked versions and threw a "mismatching version exception: you have user code using version X and node Y has version Z".

I would be very grateful for advice on this. I've submitted a bug report, because there has to be something wrong with the Spark documentation, given that I've seen two independent sysadmins hit the exact same problem with different versions of CDH on different clusters.

The exception:

Exception in thread "main" org.apache.spark.SparkException: Job aborted: Task 0.0:1 failed 32 times (most recent failure: Exception failure: java.lang.IllegalStateException: unread block data)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(
14/05/16 18:05:31 INFO scheduler.TaskSetManager: Loss was due to java.lang.IllegalStateException: unread block data [duplicate 59]

My code snippet:

val conf = new SparkConf()

println("count = " + new SparkContext(conf).textFile(someHdfsPath).count())

My SBT dependencies:

// relevant
"org.apache.spark" % "spark-core_2.10" % "0.9.1",
"org.apache.hadoop" % "hadoop-client" % "2.3.0-mr1-cdh5.0.0",

// standard, probably unrelated
"com.github.seratch" %% "awscala" % "[0.2,)",
"org.scalacheck" %% "scalacheck" % "1.10.1" % "test",
"org.specs2" %% "specs2" % "1.14" % "test",
"org.scala-lang" % "scala-reflect" % "2.10.3",
"org.scalaz" %% "scalaz-core" % "7.0.5",
"net.minidev" % "json-smart" % "1.2"

by samthebest at October 30, 2014 05:40 PM

Scala - toList vs. result on ListBuffer?

The documentation for ListBuffer offers two methods that convert the ListBuffer into a List: result and toList.

result says it produces a collection from the added elements and that the contents are undefined afterward.

toList seems to instead make a constant-time lazy copy of the contents of the buffer (and presumably leaves the buffer intact).

If toList is constant time, when would we ever prefer result? Also, am I understanding correctly that toList preserves the buffer's contents?
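The sharing behavior can be checked directly (a sketch; the buffer copies its contents on the next mutation after `toList`, so the returned list stays immutable and safe):

```scala
import scala.collection.mutable.ListBuffer

val buf = ListBuffer(1, 2, 3)
val snapshot = buf.toList // O(1): initially shares the buffer's cells
buf += 4                  // triggers an internal copy, not a snapshot change

assert(snapshot == List(1, 2, 3))       // the earlier list is unaffected
assert(buf.toList == List(1, 2, 3, 4))  // the buffer kept its contents
```

As for `result`: it exists mainly because `ListBuffer` is a `Builder[A, List[A]]`, and the generic builder contract allows the builder to be left in an undefined state after `result` is called. When you intend to keep using the buffer, `toList` is the one to reach for.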

by Kvass at October 30, 2014 05:36 PM

Planet Theory

Metrics in Academics

Congratulations to the San Francisco Giants, winning the World Series last night. In honor of their victory let's talk metrics. Baseball has truly embraced metrics as evidenced in the book and movie Moneyball about focusing on statistics to choose which players to trade for. This year we saw a dramatic increase in the infield shift, the process of moving the infielders to different locations for each batter based on where they hit the ball, all based on statistics.

Metrics work in baseball because we do have lots of statistics, but also an objective goal of winning games and ultimately the World Series. You can use machine learning techniques to predict the effects of certain players and positions and the metrics can drive your decisions.

In the academic world we certainly have our own statistics, publications counts and citations, grant income, teaching evaluation scores, sizes of classes and majors, number of faculty and much more. We certainly draw useful information from these values and they feed into the decisions of hiring and promotion and evaluation of departments and disciplines. But I don't like making decisions solely based on metrics, because we don't have an objective outcome.

What does it mean to be a great computer scientist? It's not just a number, not necessarily the person with a large number of citations or a high h-index, or the one who brings in huge grants, or the one with high teaching scores, or whose students get high-paying jobs. It's a much more subjective measure: the person who has a great impact, in the many various ways one can have an impact. It's why faculty applications require recommendation letters. It's why we have faculty recruiting and P&T committees, instead of just punching in a formula. It's why we have outside review committees that review departments and degrees, and peer review of grant proposals.

As you might have guessed, this post is motivated by attempts to rank departments based on metrics, such as those described in the controversial guest post last week or by Mitzenmacher. There are so many rankings based on metrics that you just need to find one that makes you look good. But metric-based rankings have many problems; most importantly, they can't capture the subjective measure of greatness, and people will disagree on which metric to use. If a ranking takes hold, you may optimize to the metric instead of to the real goals, a bad allocation of resources.

I prefer the US News & World Report approach to ranking CS departments, which is based heavily on surveys filled out by department and graduate committee chairs. For the subareas, it would be better to have, for example, theory people rank the theory groups, but I still prefer the subjective approach.

In the end, the value of a program is its reputation, for a strong reputation is what attracts faculty and students. Reputation-based rankings can best capture the relative strengths of academic departments in what really matters.

by Lance Fortnow at October 30, 2014 05:35 PM


Error context in error handling in Scala

Suppose I need to call a remote JSON/HTTP service. I make a JSON request, send it by HTTP to the server, and receive and parse the JSON response.

Suppose I have a data type MyError for errors and all my functions return Either[MyError, R]:

type Result[A] = Either[MyError, A]    

def makeJsonRequest(requestData: RequestData): Result[String] = ...

def invoke(url: URL, jsonRequest: String): Result[String] = ...

def parseJsonResponse(jsonResponse: String): Result[ResponseData] = ...

I can combine them to write a new function:

def invokeService(url: URL, requestData: RequestData): Result[ResponseData] = for {
   jsonRequest <- makeJsonRequest(requestData).right
   jsonResponse <- invoke(url, jsonRequest).right
   responseData <- parseJsonResponse(jsonResponse).right
} yield responseData

Now what if parseJsonResponse fails?

I get the error, but I also need the whole context: url, requestData, and jsonRequest. How would you suggest I do it?
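One approach, sketched below with simplified stand-in signatures (the withContext helper and the String types are hypothetical, not from the question), is to let each step annotate its own error via the left projection, so the failure carries whatever context was in scope where it happened:

```scala
object ErrorContextSketch {
  case class MyError(message: String, context: List[String] = Nil) {
    def withContext(c: String): MyError = copy(context = c :: context)
  }
  type Result[A] = Either[MyError, A]

  // Hypothetical stand-ins for the real functions in the question
  def makeJsonRequest(requestData: String): Result[String] = Right("""{"q":1}""")
  def invoke(url: String, jsonRequest: String): Result[String] = Right("""{"a":2}""")
  def parseJsonResponse(jsonResponse: String): Result[String] = Left(MyError("bad json"))

  // Each step annotates its own error with the context it knows about
  def invokeService(url: String, requestData: String): Result[String] = for {
    jsonRequest <- makeJsonRequest(requestData)
                     .left.map(_.withContext(s"requestData=$requestData")).right
    jsonResponse <- invoke(url, jsonRequest)
                      .left.map(_.withContext(s"url=$url jsonRequest=$jsonRequest")).right
    responseData <- parseJsonResponse(jsonResponse)
                      .left.map(_.withContext(s"jsonResponse=$jsonResponse")).right
  } yield responseData
}
```

Alternatives in the same spirit include an error ADT with a cause/context field per failure site; the key point is that context is attached at the point where it is known.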

by Michael at October 30, 2014 05:32 PM


Dave Winer

Developing better developers

About universities and open source projects, and why they go together.

  1. We want to teach technology in university.

  2. So far this has meant teaching programming basics. Which is good, everyone needs to know how to write a little code. It's like teaching chemistry to doctors.

  3. But there's so much more to technology. There's a whole spectrum of activities needed to make software (the code) become useful and responsible to humanity.

  4. There's nowhere to go to learn how to create a standard. Or how to write a great bug report. Or how to explain stuff to users, and feed back what we learn into the design of the product.

  5. How about the full spectrum of possible products? What recipes haven't we tried?

  6. We can even teach how to be creative. There are processes for this. Software design isn't different from any other kind of design.

  7. University should not only be about student projects. We should give students experience with the real thing, production software, used by actual people. When you make changes you can break users. Let's teach the next generation of developers how not to do that. Again, there are techniques and methods for this. We've been around this block.

University has a role to play in software. These are our most long-lived institutions. People should come in and out of university all through their lives. The projects we work on when we're students can be the ones we continue to work on in our careers, when we take a sabbatical and when we retire. We should be constantly sharing and recycling the knowledge we gain.

Education has been struggling to find a role in technology, but to me, the role is very clear. Teaching, through practice, and research -- developing new knowledge.

Every university should host an open source project. It should be a process that lasts decades, spans generations. The goal is two-fold: Add to our technology, and to develop better developers.

October 30, 2014 05:28 PM


Pseudorandom generators indistinguishable by uniform deterministic adversaries

I've seen pseudorandom generators defined for nonuniform efficient adversaries, or uniform probabilistic efficient adversaries. (For example, Vadhan's monograph Pseudorandomness, a draft of which is available online, does that.)

I believe that it's natural to think about pseudorandom generators indistinguishable by uniform deterministic efficient adversaries. Has that notion been studied before? Does it have any significance?

EDIT: I was mistaken in saying that Vadhan defined pseudorandomness against uniform probabilistic adversaries: he did define indistinguishability by uniform probabilistic algorithms, but he neither defined, nor stated facts about, the corresponding notion of pseudorandomness. My interest is mostly in uniform adversaries, and my original question was about deterministic vs. probabilistic.

by Pteromys at October 30, 2014 05:26 PM


What's the best rpc solutions for clojure?

I would like to write some business logic in Clojure. The client is Java or PHP.

Clojure has a weak type system, but in Java it is different.

What is the best way to write the logic in Clojure when the client is Java?

by user2219372 at October 30, 2014 05:24 PM



Calibration of a GBM - what should dt be?

I have a time series of daily data that I want to calibrate GBM parameters $\mu$ and $\sigma$ to. Using the discretized solution

$$ S_{t_{i+1}} = S_{t_i}\exp\left(\left(\mu - \frac{\sigma^2}{2}\right)\Delta t + \sigma \sqrt{\Delta t}\,Z_{i+1}\right), $$ calibrating the parameters $\mu$ and $\sigma$ to a given time series with $n$ values turns out to be simply computing

$$ \sigma = \frac{std(R)}{\sqrt{\Delta t}}, \qquad \mu = \frac{\mathbb{E}[R]}{\Delta t} + \frac{\sigma^2}{2}, $$

where $R$ is a vector of log returns with components $R_{i+1} = \log S_{t_{i+1}} / S_{t_i}$, $1 \leq i \leq n-1$. The term $std(R)$ denotes the standard deviation of $R$.

Now, the time step $\Delta t = t_{i+1} - t_i$ is supposed to be the length of time between values in the series. Recall the closed-form solution to a GBM evaluated at "final" time $T$ is $$ S_T = S_0\exp\left(\left(\mu - \frac{\sigma^2}{2}\right)T + \sigma W(T)\right). $$ So, if I have a time series history of daily prices spanning exactly one year (say 28 Oct 2013 - 28 Oct 2014), what should $T$ and $\Delta t$ be? In addition, $n=253$ in my series, even though the dates cover 365 days.
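To make the two conventions concrete, here is a small sketch of the estimator (plain Scala, illustrative only; the toy price series is made up and does not reproduce the futures numbers below):

```scala
object GbmCalibration extends App {
  // Log returns R_{i+1} = log(S_{t_{i+1}} / S_{t_i})
  def logReturns(prices: Seq[Double]): Seq[Double] =
    prices.sliding(2).map { case Seq(a, b) => math.log(b / a) }.toSeq

  def mean(xs: Seq[Double]): Double = xs.sum / xs.size

  // Sample standard deviation
  def std(xs: Seq[Double]): Double = {
    val m = mean(xs)
    math.sqrt(xs.map(x => (x - m) * (x - m)).sum / (xs.size - 1))
  }

  // sigma = std(R) / sqrt(dt),  mu = E[R] / dt + sigma^2 / 2
  def calibrate(prices: Seq[Double], dt: Double): (Double, Double) = {
    val r = logReturns(prices)
    val sigma = std(r) / math.sqrt(dt)
    (mean(r) / dt + sigma * sigma / 2, sigma)
  }

  val prices = Seq(100.0, 101.0, 99.5, 100.7, 102.0)
  println(calibrate(prices, 1.0 / 253)) // annualized, trading-day convention
  println(calibrate(prices, 1.0))       // per-observation (daily) units
}
```

Note that with $\Delta t = 1/253$ the parameters come out annualized in trading time (253 observations spanning one trading year), whereas $\Delta t = 1$ leaves them in per-day units, which is why the last two runs above look tiny; the scalings describe the same data in different time units.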

Some results: using natural gas futures prices with dates given above.

$T = 1$ and $\Delta t = 1/365$, I get $\sigma = 0.32$ and $\mu = 0.07$.

$T = 1$ and $\Delta t = 1/253$, I get $\sigma = 0.27$ and $\mu = 0.05$.

$T = 365$ and $\Delta t = 1$, I get $\sigma = 0.02$ and $\mu = 0.0002$.

$T = 253$ and $\Delta t = 1$, I get $\sigma = 0.02$ and $\mu = 0.0002$ (same as before).

The first two seem more reasonable for my time series. Any thoughts?

by bcf at October 30, 2014 05:20 PM


Mobaxterm 7.3 + emacs 24.4.1

Mobaxterm 7.3 includes emacs 24.4.1 out of the box under Windows:


apt-get install emacs-X11; emacs-X11 
submitted by csantosbu
[link] [comment]

October 30, 2014 05:04 PM


What Languages/Skills/etc. I Should Pick Up on the Side?

I'm a freshman Computer Science major and I'm taking a class in C++. What specific systems/skills/programming languages/etc. should I learn on the side that complement what I'm learning in class?

submitted by NorbertJr
[link] [3 comments]

October 30, 2014 05:04 PM

Computer vision question.

I do quarterly AI challenges with a few colleagues of mine. Fall's challenge was to make an extremely parallel architecture for a needs-based utility SDK that could handle 100k+ agents (I beat them badly and even had time to make a GUI editor).

Winter's challenge is to create a computer vision application that can a) recognize a single picture of a duck (we'll know exactly what picture it is 2 hours before testing day at the end of December) and b) recognize a torn-up version of the print out.

The first half of this problem is quite easy, as I can just make a framework to recognize the single image, and feed in the file a few minutes before (since we'll be provided with both a print out and a png copy of the image 2 hours before). I was considering applying edge detection to find the print out first, and then running a convolution to see if the picture matches what the system is expecting.

But I don't know how to do the second half. I just don't know how to approach it. All pieces of the torn print out will be visible within the application's input frame, but will be scattered around and most certainly out of order.

Anyone have any ideas on how to approach the second half? (Sorry if my terminology sucks. I've never done computer vision besides a 3d scanner for a spring break project).

I have 1 month to make my implementation and I can't use already-existing APIs (so no OpenCV).

submitted by FerretDude
[link] [1 comment]

October 30, 2014 05:03 PM


How to keep a groupedby list sorted in Play Framework 2 templates

I've got a list of complex objects that I want to display, grouped by one of its attribute in a Play 2 template.

I managed to do it :

@measures.groupBy(_.question.category).map {
    case (category, items) => {
         // Category stuff
         @for(item <- items) {
             // List of items
         }
    }
}

The problem is that the list was sorted in my Java controller, but the keyset of the map that I create is not sorted anymore (I would like to sort the key set using something like _.question.category.order).

Is there a way to have a sorted map on this attribute?
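Since groupBy returns an unordered Map, one option (a sketch in plain Scala; Category, Question, and Measure here are hypothetical stand-ins for the question's model) is to convert the grouped map to a sequence and sort it before rendering, e.g. @measures.groupBy(_.question.category).toSeq.sortBy(_._1.order) in the template:

```scala
case class Category(name: String, order: Int)
case class Question(category: Category)
case class Measure(question: Question)

object SortedGroups extends App {
  val measures = List(
    Measure(Question(Category("b", 2))),
    Measure(Question(Category("a", 1))),
    Measure(Question(Category("a", 1)))
  )

  // groupBy gives no ordering guarantee; sort the grouped entries afterwards
  val grouped: Seq[(Category, List[Measure])] =
    measures.groupBy(_.question.category).toSeq.sortBy(_._1.order)

  grouped.foreach { case (category, items) =>
    println(s"${category.name} -> ${items.size} item(s)")
  }
}
```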

Thanks!

by cpoissonnier at October 30, 2014 05:01 PM


SVN which src branch when on 10.1-RC3?

Hey, I wish to use ezjail and need to populate /usr/src; which branch should I check out when I am on 10.1-RC3?

Do I need to change anything for RC4 and RELEASE when they come out?

submitted by Penetal
[link] [4 comments]

October 30, 2014 04:59 PM



What did “The Art of Computer Programming” look like before TeX?

Knuth developed TeX in response to the technology used for typesetting the first volume of TAoCP no longer being available and all the replacements producing shitty quality.

I'd like to know what this original edition of the first volume looked like. Are there any scans or photos available? I want to see this in order to understand the improvements TeX brought to scientific typesetting.

submitted by FUZxxl
[link] [29 comments]

October 30, 2014 04:34 PM


Writing parallel code for apache Spark

Is there a standard set of rules I should follow to ensure that Scala code written for Spark will run in parallel?

I find myself writing Spark code which includes calls to functions like map and filter which I think will run in a parallel/distributed way. But really I don't know how to test whether these functions run parallel/distributed. Are there texts available which explain this, specifically for Spark, or generic texts that can be applied to Spark?

There are two separate answers to this question: How to transform Scala nested map operation to Scala Spark operation? One answer claims the other is not run in parallel. But I'm not sure why to favor one implementation over the other.
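As a rough rule of thumb (a sketch, not authoritative Spark guidance): transformations called on an RDD, such as map and filter, are applied per partition and so can run distributed, while the same-named methods on an ordinary Scala collection run locally in the driver. One way to observe where work lands is to tag each element with its partition index:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WhereDoesMapRun {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("demo").setMaster("local[4]")
    val sc = new SparkContext(conf)

    // An RDD with 4 partitions; map/filter on it are applied per partition
    val rdd = sc.parallelize(1 to 8, numSlices = 4)
    val tagged = rdd.mapPartitionsWithIndex { (partition, xs) =>
      xs.map(x => s"x=$x handled in partition $partition")
    }
    tagged.collect().sorted.foreach(println)

    // By contrast, this map on a local List runs single-threaded in the driver
    val local = (1 to 8).toList.map(_ * 2)
    println(local)

    sc.stop()
  }
}
```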

by blue-sky at October 30, 2014 04:32 PM


req-package on emacs 24.4

Hello community. I just migrated my config to the new emacs release 24.4. It is working completely without any noticeable issues. req-package is an extension management system with dependency resolution. It feels and looks like use-package, and is oriented toward performance and simplicity.

submitted by edvorg
[link] [5 comments]

October 30, 2014 04:25 PM

ignorant question I don't know where else to ask, sorry!

So, I am trying to learn to use emacs. I entered emacs in Terminal on my Mac and have been going through the tutorial. While reading online I wanted to try the minor mode "visual line mode", but the version that comes up in my terminal automatically is 22.1.1 and doesn't have it. I downloaded 24.4, which I can open and use through my Applications folder, but I was wondering how I can replace the version of emacs that comes up in Terminal when I type "emacs" with the version I downloaded?

I realize this might not even be specifically an emacs question, more a basic computer-type question, but I'm not experienced in these things and haven't found the right google search term to get an answer...

submitted by oops-its-not
[link] [5 comments]

October 30, 2014 04:05 PM


Can I assign a function in php

I want to know if I can reassign a function.

I have a function called mpurl($s, $pagevar, $pages); this function call occurs many times in my code. Now I want to add more cases to this function, like:

if(strpos($s, 'sign') !== false) {
    mpurl($s, $pagevar, $pages);
} else {
    my_mpurl($s, $pagevar, $pages);
}
I don't want to use this if-else at every invocation of mpurl, so I want to change the meaning of the function mpurl globally.

So here is the question: can I write something like this?

if(strpos($s, 'sign') === false) {
    mpurl = my_mpurl;
}
mpurl($s, $pagevar, $pages);

Or how else can I achieve this? Thanks.

by xcaptain at October 30, 2014 04:02 PM

Planet Clojure

Exploring programming in Thamil (not English) through Clojure

Or: A clear example of what macros can do


I started working on a library called clj-thamil that I envision as a general-purpose library for Thamil language computing (ex: mobile & web input methods), but a slight excursion in that work has led me to some very deep, intriguing ideas — some of which are technical, and some of which are socio-cultural. But they all fit together in my mind — Clojure, macros, opportunity and diversity (in computing), and the non English-speaking world.

I think that the implications are things that we should all think about. But if nothing else, hopefully you can read this account and understand something about macros — the kind of power they uniquely provide and at least one good use case where they are necessary.

Technical Aspects

How does one even begin??

I tried starting on this Thamil language project a year ago, but I immediately shelved it and left it alone for a large majority of that time. Why? I couldn’t find an editor that would support programming and typing of Thamil characters properly at the same time.

(FYI: The standard spelling is Tamil since British colonial times, but it is pronounced “Thamil”.)

I’m using Mac OS X, which has supported Unicode well. Thamil, like other South & Southeast Asian languages, is set up in Unicode so that most of its letters [human language elements] require more than 2 characters [computer memory storage type]. Character is not synonymous with letter. For example, the letter கி in Unicode is the combination of the characters க + ி. But the rendering of the character ி is not an actual letter in the Thamil language. Also, both characters have to be treated together as a unit by the OS as well as the applications rendering the text — basically, the stack between storage and user interface — for க + ி to be recognized as being side-by-side and converted into a different shape, கி. Many Mac-native applications and editors like TextEdit handle this by default. Many programming-specific editors are cross-platform and/or non-native, so even their ports to Mac OS X don’t use the OS support required for proper rendering. None of Emacs for OS X, Eclipse, IntelliJ, jEdit, or a couple of other programming text editors “worked” – got OS support to combine characters. I basically gave up, but 9 months later, I tried Aquamacs on a whim, and it worked!

Java, Unicode, and Clojure

Java was designed to support Unicode from the beginning. And by that, they mean that instead of a character being an 8-bit ASCII element, characters in Java are 16-bits as defined by the original Unicode spec. Since Clojure emits byte code that runs on the JVM, it also supports Unicode by default. What that means is that you can start using symbols (‘variable names’) where the characters come from ranges designated for other languages without problems. So the following works fine:

(def π 3.14159)

From functions to macros

Clojure, like any language that supports a functional programming paradigm, has functions as first-class values. The interesting part is that we can store first-class values: we can take any function and create a new binding (a different ‘name’) whose value is now equal to the original function.

(def ιитєяρσѕє interpose)
(ιитєяρσѕє "," ["one", "two", "three"])

So now we can ‘translate’ function names, even if superficially. So how far can we go? The core library of Clojure operations comes from special forms, functions, and macros. Special forms and macros can’t resolve to values, though, so if we can find a different way to “translate” them, we can use that to pull off a fairly extensive translation of Clojure from English to an entirely different human language (aka “natural language”).

What are macros?

In essence, a macro is a special type of function where the input is some block of code, and the output is a block of code. As a result, macros are run in a special way — they are run on the code blocks before the contents of those code blocks get evaluated.

This enables macros to abstract out code repetition in ways that regular functions can’t. Basically, if you see any code repetition whatsoever, and if that can’t be helped by better code design and refactoring the repetitious code into a new function, then a macro will be your answer. My favorite example is the with-open macro, which gracefully handles try-catch-finally blocks for I/O objects with minimal code. Doto and its ‘fancier’ cousins, the threading macros -> (“thread first”) and ->> (“thread last”), are also good examples.

Translating macros and special forms using macros

Macros operate on code at a ‘higher’ level than your regular function — we’re looking at the input code blocks to the macro as a bunch of shapes that we manipulate using the macro. Basically, we’re looking at the text of the code and treating it as data to operate on, before we take the result and then evaluate it like regular code.

So at this level on which macros operate, we can do the following to pull off our ‘translation’ idea for the special forms and macros: create a macro that takes whatever was given to it and pass it along verbatim to some other special form/macro.

As an example, if I take the Thamil word for ‘if’ – ‘எனில்’, then I want to create a macro where whatever I pass to ‘எனில்’ — (எனில் . .. …) — gets passed verbatim to ‘if’ — (if . .. …). And it turns out to be simple:

(defmacro எனில்
  [& body]
  `(if ~@body))

The code says to take the code block passed to ‘எனில்’, package them up into an array of shapes of code called ‘body’, and then unwrap the code shapes into a call to the ‘if’ function.

So we’re done! Right? All we have to do is just list out all of the functions, macros, and special forms to translate in this manner, and we will be done:

(def take எடு)
(def drop விடு)
(defmacro எனில்
  [& body]
  `(if ~@body))
(defmacro வரையறு
  [& body]
  `(def ~@body))

Macros, macros, everywhere!

That seems tedious. Inefficient. There is a lot of repetitive code here (the “def”, the “defmacro”, the shape of the defmacro definition, etc.). And we can’t really write a function to refactor out the repetitive code. But I just said that this is the kind of case that a macro can solve.

Once you strip out the repetitive code, all you are left with is:

take எடு
drop விடு
if எனில்
def வரையறு

This looks like a couple of maps, which makes sense. We’re associating an English word with a corresponding Thamil one. We need to represent the words as symbols so that the Thamil words don’t get evaluated. Putting a single quote (‘) in front of the words converts them into their symbol forms:

{'take 'எடு
 'drop 'விடு}
{'if 'எனில்
 'def 'வரையறு}

I’ll fast-forward through the details and say that you can see the final macros that take a map of symbols (the symbol of the English name mapping to the symbol of the Thamil name). And you can see the progression of steps it took to get there in the linked slides.

The final results — programming in Thamil

And here is a namespace of functions that are written in Thamil that do basic natural language operations (pluralizing a noun, adding noun case suffixes). The pluralizing function looks like this:

(வரையறு-செயல்கூறு பன்மை
  "ஒரு சொல்லை அதன் பன்மை வடிவத்தில் அக்குதல்
  takes a word and pluralizes it"
  (வைத்துக்கொள் [எழுத்துகள் (சரம்->எழுத்துகள் சொல்)]

     ;; (fmt/seq-prefix? (புரட்டு சொல்) (புரட்டு "கள்"))
     (பின்னொட்டா? சொல் "கள்")

     (= "ம்" (கடைசி எழுத்துகள்))
     (செயல்படுத்து சரம் (தொடு (கடைசியின்றி எழுத்துகள்) ["ங்கள்"]))

     (மற்றும் (= 1 (எண்ணு எழுத்துகள்))
            (நெடிலா? சொல்))
     (சரம் சொல் "க்கள்")

     (மற்றும் (= 2 (எண்ணு எழுத்துகள்))
            (ஒவ்வொன்றுமா? அடையாளம் (விவரி குறிலா? எழுத்துகள்)))
     (சரம் சொல் "க்கள்")

     (மற்றும் (= 2 (எண்ணு எழுத்துகள்))
            (குறிலா? (முதல் எழுத்துகள்))
            (= "ல்" (இரண்டாம் எழுத்துகள்)))
     (சரம் (முதல் எழுத்துகள்) "ற்கள்")

     (மற்றும் (= 2 (எண்ணு எழுத்துகள்))
            (குறிலா? (முதல் எழுத்துகள்))
            (= "ள்" (இரண்டாம் எழுத்துகள்)))
     (சரம் (முதல் எழுத்துகள்) "ட்கள்")

     (சரம் சொல் "கள்"))))

Commentary on macros and state

Because macros aren’t values like numbers, strings, and functions are, you can’t compose them. Once you use a macro, you might end up having to use more macros around it (ex: you can’t pass it around to existing higher-order functions). Our use case is an example of that. So use macros sparingly, as a last resort. Prefer using functions — they compose and can be passed as arguments to other functions. This is why I have a separate macro for translating function names, even though the macro for translating the names of macros and special forms alone would be sufficient.

While the benefit of only needing a map of symbols can be viewed as simplicity or elegance, it is the result of an instinct about programming in general imparted by Clojure’s design to try to isolate state and operate on it with a toolset of composable functions. It’s a mindset that keeps paying dividends.

Technical implications

Since the only Thamil-specific information required to effect the “translation” is stored in just 2 maps, does this mean that we can use the same strategy for any other language? Sure! Why not? As far as Java is concerned, all of the characters it sees when it parses code are 16-bit Unicode characters/codepoints. It doesn’t know which range the codepoints fall in, or even how they have to be handled by the OS and applications to appear properly. So, nothing is Thamil-specific.

Also, it’s important enough to be worth pointing out, even if it is obvious to you, that none of the macro code here required modifying Clojure as a language, or the Clojure parser or compiler. This is all “user-level” code. And yet, we’ve created what is truly an entirely new programming language. I can create code that is entirely in Thamil without knowing that Clojure / Lisp exists underneath. Cascalog is another favorite example of mine of what creating a new language on top of Lisp looks like that is written using “user-level” Lisp code, even though it doesn’t quite syntactically resemble the core Clojure / Lisp that it is based on. The power to shape your language to suit your needs, even if it starts looking like another language, is the power that macros give you. And this is why Paul Graham’s book about Lisp is called On Lisp — the title emphasizes that Lisp lets you write new languages on top of Lisp.

Technical gaps and future possibilities

The method for translation is not a true translation, as you can tell. It’s cosmetic. So there are a few places where our abstraction fails:

  • Clojure is based on Java (it runs on the Java Virtual Machine).
    Since Java is written entirely in English, any Java interop from
    Clojure will require English. Also, stack traces and error messages
    will all be in English
  • The translation of functions is done by assigning existing Clojure
    functions to Thamil symbols because functions are values. This means
    evaluating the value of a Thamil symbol referring to a function
    will use the name of its value — the (English) name of the Clojure function
  • The namespace bootstrapping problem — in order to use Thamil names
    in a namespace, you need to ‘import’ (require, in Clojure parlance)
    the namespace that contains the translations (here, clj-thamil.core).
    But until those translations are imported (‘required’), they aren’t available, so
    the require statement has to be in English. If namespace
    sounds like a weird concept, think of it like a package, module, or library.
  • Things like literals (true/false, special keywords in Clojure macros
    like :as, :refer, :keys) would have to be translated at read time. Numbers represented in other languages’ numerals would need their own logic to interpret. The boolean values true and false are tricky since they represent the Java values, so if they are returned by a Clojure function, how could you change that behavior? Change the Clojure function to return a different, equivalent value? Then create your own implementations of translations of true?, false?, nil?, and if to use your new booleans (and redefine if to point to your translated if)? At which point, you would need to re-evaluate all of the functions/macros that use if (ex: when, if-not, and) before re-evaluating your translations

Some of these issues might be solved by modifying the Clojure reader, which some projects already do. Another idea is to localize the source code for Clojure itself somehow. I would consider exploring how far the first idea can take you. The second approach seems like it would be near-comprehensive, but also a lot of difficult work that risks obsolescence when the language changes. Fortunately, Clojure as a language is “stable” as I see it — the design is carefully thought out and controlled in a consistent and cohesive way. Changes are usually additions to the language or implementation details, making most code forward-compatible (including all of the code used here).

Social and Cultural Importance

There are a lot of implications of creating the ability to program in another human language which, I think, in the balance is a net positive for the world. The most obvious point is that English is not the primary language for most of the world.

For all the kids in the non-English speaking world, especially the ones in non-Western / non-developed countries, learning to program means having to learn and think in a second language in order to learn programming and write code in a programming language. Even in a place like Southern India, which is a hotspot for programming work, this creates a challenge for kids who do not enjoy the privilege of access to good English education but who still want to program (and get lucrative jobs). The divide is clear; even the state government of Tamil Nadu, where Thamil speakers live, which also creates the Thamil language textbooks and distributes them for free to all grade school students, uses screenshots of the default English interface of basic computer software as part of its computer/technology books (at least when I last checked). Of course, the hands-on classes would be more of the same. Students who aren’t fluent in English by their teenage years manage by memorizing which clicks of which icons and UI elements do what they need. The presence of an error dialog box may tell you that something is wrong, but being able to read the text of the error message, comprehend it (along with the jargon), and take actions accordingly is a different task altogether.

The task of learning programming is hard enough. It is a technical area that requires learning a separate vocabulary. It is an abstract subject that is not necessarily easy to explain. Having programming in someone’s first language allows that person to deal with only the concepts of programming when learning it. And through different human languages, we may open up different approaches to programming than what we get through just English. What does it mean to write code that mimics human language when your language isn’t subject-verb-object (SVO), but instead subject-object-verb (SOV), as is most common in the world’s languages? Does OOP make more intuitive sense to people who speak SOV languages? What about Clojure/Lisp? In my limited experience of programming in Thamil in Clojure, it feels pretty similar. Human languages that start with a verb are rare, so in one way, you could say that Lisp is equally strange to most people. But the fact that there is less syntax to learn, that the rules of the language are few and simple, and that the code you write fits the contours of the problem domain you’re trying to solve, all contribute to the experience of Clojure in Thamil being similar to Clojure in English.

The tech industry, as epitomized by Silicon Valley, has been recently contemplating its lack of diversity — an overwhelming number are white and/or male in leading companies and startups. I’m happy to see the small, growing, wonderful efforts to address the inequity in various programming circles. But the clj-thamil project has helped me take a step back and think about addressing the segment of programmers who not only lack the privileges of others in an American context, but to address those who do not enjoy such privileges in a global context — their language, their region’s wealth, and their personal wealth.

The privileges that we enjoy in the English-speaking world should not enable us to rationalize away these differences and privileges, though. Some people might think that, perhaps, the world would be better off if everyone were to speak one language. But suppose we did. Which language would be that one language? Chinese, because it is spoken in the most populous country? Or English, because it is spoken by more people and in disproportionately wealthy countries (a legacy of unfair colonial conquests)? Esperanto, an artificial language that inherits many aspects from European languages? There is no way to decide which language is universal without establishing more inequity. Also, selecting a universal language would erase cultural and geographic knowledge (and diversity in lifestyles!). And barring these concerns, if there were magically some agreeable universal language, and given a medium that could globally connect the world instantly (ex: the internet), that universal language would still fragment along geographic and socio-economic lines because humans naturally maintain differences to mark these distinctions.

Along the lines of what Bret Victor said in Inventing on Principle, I hope that we can properly enable programming for all the people around the world in the language they think most easily in, since that is a form of expression that we are opening up and allowing to flourish.

by Elango at October 30, 2014 04:01 PM


How to write efficient type bounded code if the types are unrelated in Scala

I want to improve the following Cassandra-related Scala code. I have two unrelated user-defined types which are actually in Java source files (leaving out the details).

public class Blob { .. }
public class Meta { .. }

So here is how I use them currently from Scala:

private val blobMapper: Mapper[Blob] = mappingManager.mapper(classOf[Blob])
private val metaMapper: Mapper[Meta] = mappingManager.mapper(classOf[Meta])

def save(entity: Object) = {
  entity match {
    case blob: Blob => blobMapper.saveAsync(blob)
    case meta: Meta => metaMapper.saveAsync(meta)
    case _ => // exception
  }
}

While this works, how can you avoid the following problems

  1. repetition when adding new user defined type classes like Blob or Meta
  2. pattern matching repetition when adding new methods like save
  3. having Object as parameter type
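One common way to address all three points is a type class. The sketch below is hypothetical: the `Mapper` trait and its string-returning `saveAsync` are simplified stand-ins for the driver's API, not the real signatures. The compiler selects the right mapper from the entity's static type, so adding a new user-defined type means adding one implicit instance rather than a new `case` clause.

```scala
// Hypothetical sketch: resolve the mapper from the entity's static type.
class Blob
class Meta

trait Mapper[A] { def saveAsync(a: A): String } // stand-in for the driver's Mapper

object Mapper {
  implicit val blobMapper: Mapper[Blob] = new Mapper[Blob] {
    def saveAsync(b: Blob): String = "saved Blob"
  }
  implicit val metaMapper: Mapper[Meta] = new Mapper[Meta] {
    def saveAsync(m: Meta): String = "saved Meta"
  }
}

// No Object parameter and no pattern match: the compiler picks the instance.
def save[A](entity: A)(implicit mapper: Mapper[A]): String =
  mapper.saveAsync(entity)
```

Calling `save(new Blob)` then resolves `blobMapper` at compile time, and passing a type with no `Mapper` instance is a compile error rather than a runtime exception.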

by reikje at October 30, 2014 03:55 PM

Call to zmq_socket returns NULL and errno 14 (Bad address)

I am working on a test client/server application using ZeroMQ x86 for connectivity on Windows 8, VS 2012. Unfortunately, I have some trouble with initializing a connection. For now I simply copied (and slightly extended with error reporting) the example hwserver.c and use following code to initialize:

void* ctx;
void* rsp;
ctx = zmq_ctx_new();
DWORD dwErr = zmq_errno();
printf("Creating Context - %s\r\n", zmq_strerror(dwErr));

rsp = zmq_socket(ctx, ZMQ_REP);
dwErr = zmq_errno();
printf("Creating Socket - %s\r\n", zmq_strerror(dwErr));

This fails at the call to zmq_socket which returns rsp == NULL and dwErr==14: Bad address. Given that this is nearly identical to the example code and yet it fails I'm out of answers. Maybe someone has an idea what is wrong with that call. Maybe it is a compatibility issue with using 32bit binaries?

by antipattern at October 30, 2014 03:55 PM

Ansible file edition with other file

A lot of my routers have template ACLs. So I first make a config file for each one, e.g. with the name kh.tb05

- name: Populate configs (with object groups)
    dest={{ config_dir }}/{{ item.obj }}.txt
  with_items: kha_routers_obj

But some have additional rules, so I want to place them in the populated files; e.g. kh.tb05 is a populated config file and addkhtb05 is a file with some additional rules. I tried this config, but it doesn't work.

- name: read file
  command: cat {{ item }}
  register: contents_{{ item }}
  with_fileglob: {{ config_dir }}/addkh.*

- name: add conf
   dest={{ item }}
   insertbefore='deny ip any any log'
   line=contents_{{ item }}.stdout
   with_fileglob: {{ config_dir }}/kh.*


- name: read file
  command: cat {{ item }}
  register: contents
  with_fileglob: /etc/ansible/kha/roles/ACL/templates/addkh.*
- debug: contents

Now it plays, but the output is strange.

TASK: [ACL | debug contents.stdout] ******************************************* 
ok: [localhost] => {
    "msg": "Hello world!"
}

by Coul at October 30, 2014 03:54 PM



Python list extension after map

This is my first venture into functional programming. I wrote some code yesterday and today I am trying to re-write it as functional... just for fun.

Code basically looks up values in a couple of lists and massages the values, here is yesterday's code (bad I know, but it was one of those slow brain days):

def prep_ratios(self):
    if 'ratios' in
        out = []
        for ratio in['ratios']: 
            new = []
            f1 = []
            f2 = []
            if ratio[0] in self.fieldmap:
                f1 = []
                for calc in self.fieldmap[ratio[0]]:
            if ratio[1] in self.fieldmap:
                f2 = []
                for calc in self.fieldmap[ratio[1]]:
            if len(f1) == len(f2):
                [out.append(x) for x in [[x,y] for x,y in zip(f1,f2)]]['ratios'] = out

Incoming ratios lists like

[['session_count_EventSearch_i', 'session_count_EventProductView_i'], ['session_count_EventSearch_i', 'session_count_EventAddToCart_i']]

fieldmap has a list of calculations that were performed on each field, let's say it looks like: ['avg','min']

The output of this code is:

[['avg_session_count_EventSearch_i', 'avg_session_count_EventProductView_i'], ['min_session_count_EventSearch_i', 'min_session_count_EventProductView_i'], ['avg_session_count_EventSearch_i', 'avg_session_count_EventAddToCart_i'], ['min_session_count_EventSearch_i', 'min_session_count_EventAddToCart_i']]

It is exactly what I am looking for.

Now to functional fun, here is what I have now:

    def prep_ratios(self):
        def prep_ratio(ratio):
            _combine = lambda x,y: x+'_'+y
            _check = lambda x: x in self.fieldmap
            _get_calcs = lambda x: list(self.fieldmap[x])
            f1 = [_combine(calc,ratio[0]) for calc in _get_calcs(ratio[0])]
            f2 = [_combine(calc,ratio[1]) for calc in _get_calcs(ratio[1])]
            if len(f1) == len(f2): return [ [x,y] for x,y in zip(f1,f2)]

        if 'ratios' in
  ['ratios'] = map(prep_ratio,['ratios'])

The output looks like:

[[['avg_session_count_EventSearch_i', 'avg_session_count_EventProductView_i'], ['min_session_count_EventSearch_i', 'min_session_count_EventProductView_i']], [['avg_session_count_EventSearch_i', 'avg_session_count_EventAddToCart_i'], ['min_session_count_EventSearch_i', 'min_session_count_EventAddToCart_i']]]

It looks like each ratio combination is wrapped in its own list from map. I need to get rid of these to get the same output I was getting yesterday. I've tried a few of the following:

  • create a new array and appending in it (in the if statement)
  • list.extend method on data coming from map

I am looking for feedback on two points: 1. the functional code and how it can be improved. I really hate having two lines in there for f1 and f2; what is a better way to do this while still maintaining some readability? 2. More importantly, how can I get the data to look correct?

Thanks in advance


Looks like I have #2 figured out. I was able to get it right with the following. Any feedback on #1 would still be helpful.

    if 'ratios' in
        import itertools['ratios'] = list(itertools.chain.from_iterable(map(prep_ratio,['ratios'])))

by user2630270 at October 30, 2014 03:33 PM


Constructing inequivalent binary matrices

I am trying to construct all inequivalent $8\times 8$ matrices (or $n\times n$ if you wish) with elements 0 or 1. The operation that gives equivalent matrices is the simultaneous exchange of the i and j row AND the i and j column. eg. for $1\leftrightarrow2$ \begin{equation} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{array} \right) \sim \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right) \end{equation}

Eventually, I will also need to count how many equivalent matrices there are within each class, but I think Polya's counting theorem can do that. For now I just need an algorithmic way of constructing one matrix in each inequivalence class. Any ideas?
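One simple (if expensive) approach is to pick the lexicographically smallest matrix in each equivalence class as its representative: a matrix is kept iff it equals its own canonical form. The sketch below is a hypothetical brute force; the $n!$ permutations per matrix are feasible for $n=8$, but enumerating all $2^{64}$ candidate matrices is not, so in practice this would only serve as a filter inside a smarter search (or one would use nauty-style canonical labeling).

```scala
// Sketch: canonical form = lexicographically smallest matrix obtainable by
// permuting rows and columns simultaneously with the same permutation p.
def canonical(m: Vector[Vector[Int]]): Vector[Vector[Int]] = {
  val n = m.length
  (0 until n).permutations
    .map(p => Vector.tabulate(n, n)((i, j) => m(p(i))(p(j))))
    .minBy(_.flatten.mkString) // entries are 0/1, so string order is lexicographic
}

// A matrix represents its class iff it equals its canonical form.
def isRepresentative(m: Vector[Vector[Int]]): Boolean = canonical(m) == m
```

Since `canonical` only depends on the orbit of the matrix, the two equivalent $3\times 3$ matrices in the example above get the same canonical form.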

by Heterotic at October 30, 2014 03:31 PM

A basic question about approximation algorithms for the Traveling Salesman Problem

Approximating the traveling salesman problem (TSP) within a constant factor $k$ is hard. The standard proof shows that the existence of such an approximation allows the Hamilton Cycle problem to be decided. This standard proof is discussed in many places including accepted answers on cs.stackexchange here and here.

Briefly, the standard proof for $k=2$ works as follows. Suppose a graph $G$ with $n$ vertices is given as input to the Hamilton Cycle problem. We create a weighted complete graph $H$ with the same $n$ vertices as follows: The edges in $G$ are given weight 1, and the added edges are given a large weight, say $n+2$. Notice that every Hamilton cycle in $H$ either has total weight $n$, or at least $(n-1) + (n+2) = 2n+1$. This gap allows a 2-approximation algorithm for TSP to answer the decision problem. More specifically, if $G$ has a Hamilton Cycle, then the 2-approximation algorithm must return the value $n$ since the other Hamilton cycles have total weight $\geq 2n+1$. On the other hand, if $G$ does not have a Hamilton cycle, then the 2-approximation algorithm will return a value larger than $n$.

I'm uncertain about something even more basic. What would a 2-approximation algorithm return if we did not add the edges of weight $n+2$? In other words, suppose we take graph $G$ and create a weighted graph $G_1$ by assigning weight 1 to each edge in $G$, and we do not add any new edges. In this case, the Hamilton cycles in $G_1$ (if any) all have total weight $n$, and they all correspond to Hamilton cycles in $G$. There is no gap in the total weights of the Hamilton cycles.

What exactly would the 2-approximation algorithm return if $G$ has a Hamilton cycle? And what if $G$ does not have a Hamilton cycle?

by Aaron at October 30, 2014 03:29 PM


How to match against a set of possible values?

I have a set and I want to match another variable against any one of its elements. I know I can do this manually like this:

val fruits = Set("a", "b", "c", "d")
val toMatch = ("a", "fruit")
toMatch match {
   case (("a" | "b" | "c" | "d", irrelevant)) => true
}

But is there any way to use fruits in the match statement, so I don't have to manually expand it

EDIT: I currently am using an if condition to do this, I was wondering if there is some syntactic sugar I can use to do it inline

val fruits = Set("a", "b", "c", "d")
val toMatch = ("a", "fruit")
toMatch match {
   case ((label, irrelevant)) if fruits.contains(label) => true
}

If there is no other answer, I'll mark the first person who responded with the if guard as the solution! Sorry about the lack of clarity there.

EDIT2: The reason for this if you are wondering is

val fruits = Set("a", "b", "c", "d")
val vegetables = Set("d", "e", "f")
val toMatch = ("a", "fruit")
toMatch match {
   case ((label, "fruit")) if fruits.contains(label) => true
   case ((label, "vegetable")) if vegetables.contains(label) => true
}

I would like to combine the two cases so I have one condition for each return type.
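One way to collapse the cases into a single condition is to key the sets by the label kind. This is a sketch using a hypothetical `categories` map (not anything built in), so adding a category means adding a map entry rather than a new case clause:

```scala
// Sketch: look the right set up by kind instead of writing one case per set.
val fruits = Set("a", "b", "c", "d")
val vegetables = Set("d", "e", "f")
val categories = Map("fruit" -> fruits, "vegetable" -> vegetables)

def matches(toMatch: (String, String)): Boolean = toMatch match {
  case (label, kind) if categories.get(kind).exists(_.contains(label)) => true
  case _ => false
}
```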

by Nicomoto at October 30, 2014 03:28 PM



Could core.async have implemented its functions in terms of sequences?

Rich Hickey's Strange Loop transducers presentation tells us that there are two implementations of map in Clojure 1.6, one for sequences in clojure.core and one for channels in core.async.


Now we know that in 1.7 we have transducers, for which a foldr (reduce) function is returned from higher order functions like map and filter when given a function but not a collection.

What I'm trying, and failing, to articulate is why core.async functions can't return a sequence, or be Seq-like. I have a feeling that the 'interfaces' (protocols) are different, but I can't see how.

Surely if you're taking the first item off a channel then you can represent that as taking the first item off a sequence?

My question is: Could core.async have implemented its functions in terms of sequences?

by hawkeye at October 30, 2014 03:21 PM



Name of special class of k-partite graphs

Consider directed graphs $G=(V,E)$ such that $V$ can be partitioned into sets $V_1, V_2, \ldots, V_k$. For the edges we have that if $v \in V_i$ and $(v,w) \in E$ then $w \in V_{i+1}$ for all $1 \le i < k$. These graphs are specializations of $k$-partite graphs. Do they have a special name?

by Rupert Jones at October 30, 2014 03:20 PM


Why are aggregate and fold two different APIs in Spark?

When using the Scala standard lib, I can do something like this:

scala> val scalaList = List(1,2,3)
scalaList: List[Int] = List(1, 2, 3)

scala> scalaList.foldLeft(0)((acc,n)=>acc+n)
res0: Int = 6

Making one Int out of many Ints.

And I can do something like this:

scala> scalaList.foldLeft("")((acc,n)=>acc+n.toString)
res1: String = 123

Making one String out of many Ints.

So, foldLeft could be either homogeneous or heterogeneous, whichever we want, it's in one API.

While in Spark, if I want one Int out of many Ints, I can do this:

scala> val rdd = sc.parallelize(List(1,2,3))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[1] at parallelize at <console>:12
scala> rdd.fold(0)((acc,n)=>acc+n)
res1: Int = 6

The fold API is similar to foldLeft, but it is only homogeneous: an RDD[Int] can only produce an Int with fold.

There is an aggregate API in Spark too:

scala> rdd.aggregate("")((acc,n)=>acc+n.toString, (s1,s2)=>s1+s2)
res11: String = 132

It is heterogeneous: an RDD[Int] can produce a String now.

So, why are fold and aggregate implemented as two different APIs in Spark?

Why are they not designed like foldLeft that could be both homogeneous and heterogeneous?

(I am very new to Spark, please excuse me if this is a silly question.)
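For what it's worth, the two-stage shape of a distributed fold can be sketched without Spark (the toy two-partition list below is a stand-in, not RDD code): the per-partition step can use a single `(B, A) => B` function, but merging partition results needs a `(B, B) => B` combiner, which exists for free only when `B == A`. That is presumably why `fold` is homogeneous while `aggregate` asks for `combOp` explicitly.

```scala
// Sketch: simulate an RDD as a list of partitions.
val partitions = List(List(1), List(2, 3))

// Stage 1: the seqOp folds each partition down to a B (here B = String, A = Int).
val perPartition = partitions.map(_.foldLeft("")((acc, n) => acc + n.toString))

// Stage 2: the combOp merges partition results and must be (B, B) => B.
// fold can reuse its single op here only because it forces B == A.
val result = perPartition.foldLeft("")((s1, s2) => s1 + s2)
```

For this fixed partitioning `result` is "123"; Spark gives no ordering guarantee across partitions, which is consistent with the "132" seen above.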

by CuiPengFei at October 30, 2014 03:15 PM



How to make functional style absolutely identical to imperative style in Scala [duplicate]

This question already has an answer here:

In a book on Scala programming I came across this example. They say that this example of imperative code

def printArgs(args: Array[String]): Unit = {
    var i = 0
    while (i < args.length) {
        println(args(i))
        i += 1
    }
}

Can be "translated" into functional style like that:

def printArgs(args: Array[String]): Unit = {
    for (arg <- args)
        println(arg)
}

But to me these two pieces of code are not absolutely identical; the second one is completely missing the variable "i". And so if I want to print it along with the value of the string, I cannot do it in the second example, or can I? How?
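If the point of the index is just to print it, the functional version can recover it with zipWithIndex. A sketch (`formatArgs` is a hypothetical helper that returns the lines so they are checkable, with the printing kept separate):

```scala
// Sketch: zipWithIndex pairs each element with its position, restoring
// the loop counter i that the for-comprehension version dropped.
def formatArgs(args: Array[String]): List[String] =
  (for ((arg, i) <- args.zipWithIndex) yield s"$i: $arg").toList

def printArgs(args: Array[String]): Unit =
  formatArgs(args).foreach(println)
```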

by henry at October 30, 2014 02:50 PM


Algorithm books on a range of topics

I've been tasked with building a library of books on algorithms for our small company (about 15 people). The budget is more than 5k, but certainly less than 10k, so I can buy a fair number of books. All people here have at least a Bachelor's degree in CS or a closely related field, so while I will get some basic textbook like Cormen, I'm more interested in good books on advanced topics. (I will get Knuth's 4 volumes, BTW.)

Some list of topics would be:

  • Sorting algorithms

  • Graph algorithms

  • String algorithms

  • Randomized algorithms

  • Distributed algorithms

  • Combinatorial algorithms

  • etc.

Essentially I'm looking for good recommendations on books about major topics within CS related to algorithms and data structures. Especially stuff that goes beyond what's typically covered in algorithm and data structure classes as part of a Bachelor's degree at a good school. I know the question is quite fuzzy, since I'm looking for generically useful material. The software we develop is mostly system level stuff handling large amounts of data.

The ideal would also be to find anything that would cover fairly recent cool data structures and algorithms, which most people might not have heard about.

EDIT: Here are some preliminary books that I think I should get:

  • Introduction to Algorithms by Cormen et al.

  • Algorithm Design by Kleinberg, Tardos

  • The Art of Computer Programming Vol 1-4 by Knuth

  • Approximation Algorithms by Vazirani

  • The Design of Approximation Algorithms by Williamson, Shmoys

  • Randomized Algorithms by Motwani, Raghavan

  • Introduction to the Theory of Computation by Sipser

  • Computational Complexity by Arora, Barak

  • Computers and Intractability by Garey and Johnson

  • Combinatorial Optimization by Schrijver

A few other books my colleagues wanted that deal with techniques and algorithms for language design, compilers and formal methods are:

  • Types and Programming Languages by Pierce

  • Principles of Model Checking by Baier, Katoen

  • Compilers: Principles, Techniques, and Tools by Aho, Lam, Sethi, Ullman

  • The Compiler Design Handbook: Optimizations and Machine Code Generation, Second Edition by Srikant, Shankar

  • The Garbage Collection Handbook: The Art of Automatic Memory Management by Jones, Hosking, Moss

by mtf at October 30, 2014 02:48 PM


Does the Y Combinator Contradict the Curry-Howard Correspondence? [migrated]

The Y combinator has the type $(a \rightarrow a) \rightarrow a$. By the Curry-Howard Correspondence, because the type $(a \rightarrow a) \rightarrow a$ is inhabited, it must correspond to a true theorem. However $a \rightarrow a$ is always true, so it appears as if the Y combinator's type corresponds to the theorem $a$, which is not always true. How can this be?

by Joshua at October 30, 2014 02:48 PM


ZFS: subfilesystem is not accessible via NFS

I create two zfs datasets as described in ZFS Sharing Inheritance:

zfs create -o mountpoint=/mnt/tank/ds tank/ds
zfs create tank/ds/ds1
zfs set sharenfs=on tank/ds

After that I'm able to access tank/ds, but cannot access tank/ds/ds1. When I try to save data to tank/ds/ds1, it is saved to tank/ds instead. To be more exact, copied files appear in the folder /mnt/tank/ds/ds1/ but I can only access them after I unmount tank/ds/ds1:

# zfs unmount tank/ds/ds1
# ls -l /mnt/tank/ds/ds1/
total 1
-rw-r--r--  1 root  wheel  0 Oct 30 17:08 test.tmp

I wonder if it is by design or there is a way to make dataset tank/ds/ds1 accessible via NFS?

by Anthony Ananich at October 30, 2014 02:47 PM


Karnaugh Map is this the simplest solution possible?

I'm learning to use a Karnaugh map but I'm not sure if I obtained the simplest expression possible. Have a look at this example

Truth table (where A B C yield F):

A   B   C   |   F
0   0   0   |   1
0   0   1   |   0
0   1   0   |   1
1   0   0   |   0
0   1   1   |   1
1   0   1   |   1
1   1   0   |   0
1   1   1   |   1

Karnaugh Map:


    00  01  11  10
0|  (1  1)  0   (1)
1|  0   (1  1)  (1)

And this yields $\bar{C}\bar{A}+CB+A\bar{B}$. Is there any simpler way of choosing the 1s from the map and getting a simpler result?

by user23169 at October 30, 2014 02:45 PM


Are cyclic dependencies supported in SBT?

I have seen previous discussions, but thought I'd re-ask the question for newer versions of SBT.

Is there a way to create a cyclical dependency in SBT 0.13+?


by gregsilin at October 30, 2014 02:42 PM


Resolving ambiguity in dangling else

Initially the ambiguous grammar is as follows (with some cropped production rules):

<STMT>         -->  <IF-THEN> | <IF-THEN-ELSE>
<IF-THEN>      -->  if condition then <STMT>
<IF-THEN-ELSE> -->  if condition then <STMT> else <STMT>

To resolve the ambiguity, the previous grammar is transformed into the following one in the book I study:

<STMT>           -->  <IF-THEN> | <IF-THEN-ELSE>
<IF-THEN>        -->  if condition then <STMT>
<IF-THEN-ELSE>   -->  if condition then <E-STMT> else <STMT>
<E-STMT>         -->  <E-IF-THEN-ELSE>
<E-IF-THEN-ELSE> -->  if condition then <E-STMT> else <E-STMT>

What I cannot get is that why the following rule is necessary.

<E-IF-THEN-ELSE> -->  if condition then <E-STMT> else <E-STMT>

As far as I can tell, the update to <IF-THEN-ELSE> and the introduction of <E-STMT> are sufficient to resolve the ambiguity. Could you please give an example refuting my argument? Thanks.

by suat at October 30, 2014 02:35 PM


What common task can I try to tackle/solve with various programming languages?

I am looking for a problem to solve which enables me to try out various programming languages and gives me a means to compare their pros & cons. This is similar to ToDoMVC, which does the same task with various MVC frameworks. Lisp 99 Problems looks pretty good, but I am looking for interesting suggestions that might take on a real-world problem and be a touch less mathematical. Failing that, I might tackle a few of these anyway.

Programming languages I'm looking to try this with:

  • Haskell
  • Clojure
  • Javascript
  • Erlang
  • Scala
  • Elm

by preeve at October 30, 2014 02:34 PM


What's the intuition behind DTS(duration times spread) in fixed income?

I am having some difficulty grasping the concept of using DTS to measure credit risk. In the equity world, one typical measure of risk is beta, which is quite well-defined as the exposure to a common market factor, say S&P 500. But in credit market, it's not clear to me what the analogous common market factor would be. The original DTS paper says that DTS is the exposure to the relative spread change. However, the relative spread change can be calculated for each bond. Therefore, I have failed to see how DTS can be an exposure to some common factor. Can anyone explain exactly what DTS is measuring?

by ezbentley at October 30, 2014 02:30 PM


Use maven plugin on SBT

Is there anyway to use a maven plugin on SBT?

by Luiz Guilherme at October 30, 2014 02:25 PM


Proving approximation ratio

In our computational complexity class we recently dealt with approximation algorithms, and I was wondering how one would prove that a heuristic achieves a certain ratio with respect to the optimal solution. Looking at some notes, it seems that one has to relate the approximation to an intermediate algorithm, right?

by Marorin at October 30, 2014 02:19 PM




Authorized_key module for initial connection

Can the authorized_key module of Ansible be used to copy the SSH keys of a host to a new remote user?

by vaja at October 30, 2014 02:10 PM

Dave Winer

Which Internet do you want?

I would like to be part of the Internet where people say what they think, no matter how different, or offensive it may be to some people. Why? I care what people think.

I find I can learn from lots of points of views, even ones I don't support, although having ideas I object to repeated over and over ad nauseam is not what I have in mind.

The actual Internet I use is becoming a monoculture, where only certain points of view are tolerated. More and more so every year. This totally sucks.

If you force people to stop expressing ideas you don't like, that doesn't mean they go away. And if you can't hear what other people think, you can never change your mind. And you're probably wrong about a few things too, as we all are.

October 30, 2014 02:03 PM


Java interoperability woes with Scala generics and boxing

Suppose I've got this Scala trait:

trait UnitThingy {
  def x(): Unit
}

Providing a Java implementation is easy enough:

import scala.runtime.BoxedUnit;

public class JUnitThingy implements UnitThingy {
  public void x() {
  }
}

Now let's start with a generic trait:

trait Foo[A] {
  def x(): A
}

trait Bar extends Foo[Unit]

The approach above won't work, since the unit x returns is now boxed, but the workaround is easy enough:

import scala.runtime.BoxedUnit;

public class JBar implements Bar {
  public BoxedUnit x() {
    return BoxedUnit.UNIT;
  }
}

Now suppose I've got an implementation with x defined on the Scala side:

trait Baz extends Foo[Unit] {
  def x(): Unit = ()
}

I know I can't see this x from Java, so I define my own:

import scala.runtime.BoxedUnit;

public class JBaz implements Baz {
  public BoxedUnit x() {
    return BoxedUnit.UNIT;
  }
}

But that blows up:

[error] .../ error: JBaz is not abstract and does not override abstract method x() in Baz
[error] public class JBaz implements Baz {
[error]        ^
[error] /home/travis/tmp/so/js/newsutff/ error: x() in JBaz cannot implement x() in Baz
[error]   public BoxedUnit x() {
[error]                    ^
[error]   return type BoxedUnit is not compatible with void

And if I try the abstract-class-that-delegates-to-super-trait trick:

abstract class Qux extends Baz {
  override def x() = super.x()
}

And then:

public class JQux extends Qux {}

It's even worse:

[error] /home/travis/tmp/so/js/newsutff/ error: JQux is not abstract and does not override abstract method x() in Foo
[error] public class JQux extends Qux {}
[error]        ^

(Note that this definition of JQux would work just fine if Baz didn't extend Foo[Unit].)

If you look at what javap says about Qux, it's just weird:

public abstract class Qux implements Baz {
  public void x();
  public java.lang.Object x();
  public Qux();
}

I think the problems here with both Baz and Qux have to be scalac bugs, but is there a workaround? I don't really care about the Baz part, but is there any way I can inherit from Qux in Java?

by Travis Brown at October 30, 2014 01:59 PM


Relating univalence for a theory of categories to the skeleton concept

Say I work in homotopy type theory and my sole objects of study are conventional categories.

Equivalences here are given by functors $F:{\bf D}\longrightarrow{\bf C}$ and $G:{\bf C}\longrightarrow{\bf D}$ which provide equivalences of categories ${\bf C} \simeq {\bf D}$. There exist natural isomorphisms $\alpha:\mathrm{nat}(FG,1_{\bf C})$ and $\beta:\mathrm{nat}(GF,1_{\bf D})$ so that this functor and "inverse" functor are transformed to unit functor.

Now univalence relates equivalences to the identity type ${\bf C}={\bf D}$ of the intensional type theory I have chosen to talk about categories. Since I only deal with categories, and those are equivalent iff they have isomorphic skeletons, I wonder if I can express the univalence axiom in terms of passing to the skeleton of the categories.

Or, otherwise, can I define the identity type, i.e. the syntactic expression ${\bf C}={\bf D}:=\dots$, in a way which essentially says "there is a skeleton (or isomorph) and ${\bf C}$ and ${\bf D}$ are both equivalent to it"?

(In the above I try to interpret the type theory in terms of concepts which are easier to define - the category-theoretical notions. I think about this because, morally, it seems to me that the axiom "corrects" intensional type theory by hard-coding the principle of equivalence, which is already a natural part of the formulation of category-theoretical statements, e.g. specifying objects only in terms of universal properties.)

by NikolajK at October 30, 2014 01:58 PM


Intellij IDEA: shortcut to switch between code and Scala console (Mac)?

One of the most frequent things I used to do in Emacs is to have two buffers open: one for Scala code, and one for the Scala Console/REPL, and send code from the code buffer to the console, and rapidly switch between the two buffers using my own defined keyboard shortcuts.

What are the keyboard short-cuts to do this in Intellij IDEA with Scala? None of the docs seem to have exactly what I want, which is:

  1. have both the Scala console (REPL) and the file (code) windows open
  2. switch back and forth between code and console.

If there's no pre-defined keyboard shortcut, then is there an action for this, so I can define my own shortcut? I know there's "Jump to last window" (F12) and "Restore default layout" (Shift F12); these almost get me what I want: F12 takes me to the Scala console, and Shift F12 takes me to the code but closes the Scala console, and I want the Scala console to remain open.

More generally, it would be great to have Shortcuts/Actions to simply cycle through the open windows/components in the IDE, without having to use the switcher (Ctrl-TAB).

by Prasad Chalasani at October 30, 2014 01:56 PM



objects of the same class have different method signatures

Look at the following snippet:

class C
val c1 = new C { def m1 = "c1 has m1" }
val c2 = new C { def m2 = "c2 has m2" }


Run it in the REPL and you'll see what I mean.

My limited Java knowledge tells me that in Java, objects of the same class will have the same method signatures, and as far as OO is concerned, there is not much difference between Java and Scala under the hood (correct me if I'm wrong), so I'm very surprised to see that this snippet is sound Scala code.

so why?
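A sketch of what the REPL is actually showing (the assertions mirror the snippet above): each `new C { ... }` creates an anonymous subclass, much as in Java, but Scala additionally infers a refinement type such as `C{def m1: String}` that keeps the extra member statically visible, which is why the calls type-check.

```scala
class C
val c1 = new C { def m1 = "c1 has m1" } // inferred type: C{def m1: String}
val c2 = new C { def m2 = "c2 has m2" } // inferred type: C{def m2: String}

// Upcast to plain C and the extra member disappears from the static type:
val justC: C = c1
// justC.m1  // does not compile: m1 is not a member of C
```

So the class `C` itself is unchanged; what differs between `c1` and `c2` is the (anonymous) subclass each value belongs to and the refinement type the compiler records for it.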

by Haiyuan Zhang at October 30, 2014 01:55 PM

Merging maps without overriding keys

I have a clojure function that returns a sequence of 1-key maps. I want to merge these maps into one map; however, if there are maps with the same key, I don't want to overwrite the values, only to combine them into a vector. merge seems to overwrite, and merge-with seems to seriously distort the type.

I have:

({:foo "hello"}
 {:bar "world"} 
 {:baz "!!!"}
 {:ball {:a "abc", :b "123"}}
 {:ball {:a "def", :b "456"}}
 {:ball {:a "ghi", :b "789"}})

I'd like:

{:foo "hello"
 :bar "world"
 :baz "!!!"
 :ball [{:a "abc", :b "123"} {:a "def", :b "456"} {:a "ghi", :b "789"}]}


by Cameron Cook at October 30, 2014 01:51 PM

Planet FreeBSD

PC-BSD Forums Now Support Tapatalk

The PC-BSD forums are now accessible on Tapatalk. Tapatalk is a free forum app for your smartphone that lets you share photos, post, and reply to discussions easily on-the-go.

Tapatalk can be downloaded from here. Once installed,  search “” from the Tapatalk Explore tab. Be sure to  add the “PC-BSD Forums” search result, not just “PC-BSD”.

by dru at October 30, 2014 01:48 PM

Dave Winer

Throwback Thursday

This picture was taken at Davos on January 27, 2000.

A picture named daveAtDavos.jpg

It's notable because I was wearing a suit, as is the custom, in a ski resort in the Swiss Alps.

I know it doesn't make a whole lot of sense to me either.

PS: A blog post I wrote from Davos, two days later. Pretty sure I wasn't wearing a suit when I wrote that piece.

PPS: I'm not wearing a suit now.

October 30, 2014 01:43 PM

Planet FreeBSD


Several members of the PC-BSD and FreeNAS teams will be attending MeetBSD, to be held on November 1 and 2 in San Jose, CA. There’s some great presentations lined up for this event and registration is only $75.

As usual, we’ll be giving out PC-BSD and FreeNAS media and swag. There will also be a FreeBSD Foundation booth that will accept donations to the Foundation. The BSDA certification exam will be held at 18:00 on November 1.

by dru at October 30, 2014 01:41 PM



FreeBSD: pkg_delete -a by accident. How to restore?

I accidentally ran "pkg_delete -a" on FreeBSD 9.1. Is there any way to undo this operation or revert it?

And if not, is there some way to copy the packages installed on another server? (There are basically 4 servers that are alike; they all contain the same packages, and this operation was performed on only one of them.)

by Miki Berkovich at October 30, 2014 01:33 PM

What is going on in the match functionality?

I have a method:

  def replaceSpecialSymbols(str: String): String = str.collect {
    case '/'     => '-'
    case _ => _
  }

When I try to build this code, I receive the error message: "error: unbound placeholder parameter case _ => _"

I know that I can use replaceAll. But I want to know what is going on in this case in Scala compiler.
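On the right-hand side of `=>`, `_` is not "the value that was matched" but a placeholder for the parameter of a new anonymous function, and there is nothing for it to bind to, hence "unbound placeholder parameter". Naming the binding makes it compile; a sketch of the likely intent (using map, since every character is transformed):

```scala
// Sketch: bind the matched character to a name instead of using _.
def replaceSpecialSymbols(str: String): String = str.map {
  case '/' => '-'
  case c   => c // a named binding, where `_` failed to compile
}
```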

Thank you.

by Nikolay at October 30, 2014 01:33 PM



Platform for Quantitative equity portfolio

What are the most popular platforms used for quantitative equity portfolio management/research?

I've only used Barra so far for their factor models. Is there any specific feature or model you think that'll be really helpful for such a product?

I am looking for a product so that I can focus on idea/strategy generation rather than spending a lot of time in preparing data or other fundamental work.

by hotsource at October 30, 2014 01:30 PM


Convention for marking methods as pure functions in Java

Reading the code of some complex application, I thought it could often be helpful to recognize whether a method has side effects just by looking at its signature. And after inspecting some such methods, I thought it could be nice to mark them as purely functional or not, in order to make life easier for people reading this code in the future.

Is there some convention in the Java world (Javadoc, a method naming pattern, etc.) which identifies a method as a pure function?

by ipbd at October 30, 2014 01:13 PM



Is $AlwaysHalt$ recursively enumerable?

I was doing some complexity theory exercises and I came across this one:

$AlwaysHalt = \{R(M) \mid M$ halts on all $x \in \{0,1\}^*\}$

Is $AlwaysHalt$ recursively enumerable?

I would say YES and construct the following TM that accepts this language:

Enumerate all $x \in \{0,1\}^*$, starting with length 1 and increasing the length in every iteration, and try the current $x$ with our TM $M$. We only care about those machines $M$ that halt on all words (we do not care if we do not halt; it would only mean that $R(M)$ is not in our language), so each $x$ will halt eventually. There are only countably many words $x \in \{0,1\}^*$, so we can do that.

However, I am not sure if this is a correct proof. Can I enumerate all words from an infinite (but countable) language to show that a TM behaves in some way on every single one of them?

by Smajl at October 30, 2014 01:05 PM



How do ansible host_vars work?

I created a repo to reproduce my scenario.

Essentially we are loading an inventory with our hosts, we can override values per-host via the inventory without issue but would like to try and utilize host_vars.

I'm not 100% clear on how host vars are matched to the host. I read the ansible repo for examples but cannot seem to get it to work as documented, so I'm looking for some scrutiny of our setup.

When I run the command ansible-playbook -i ansible.inventory site.yml -clocal in my example repo I expect the host_vars/{{ ansible_hostname }} file to be read and override anything set in the vars but that does not appear to be happening.

Can someone please point me at a working example so I can see where we are going wrong?

by marshall at October 30, 2014 12:35 PM


Inflation-Linked Bonds & Asset Swap Spreads

I am trying to plot the asset swap spreads of government inflation-linked bonds (ILBs) versus the asset swap spread of government nominal (plain-vanilla) reference bonds.

I used the article in the link below:

My questions/concerns:

a.) I have conceptual concerns about using the net-proceeds asset swap structure (let me qualify that by saying: given my understanding). My understanding is that we are trying to solve for the asset swap spread (which is built into the floating leg of the asset swap) which sets:


where Fixed denotes the fixed leg of the swap and Floating the floating leg of the swap. I felt more comfortable with the par asset swap structure - solving for the asset swap spread which sets:


where AIP is the current bond all-in-price. Why I liked this is that if a bond was issued with a high coupon rate (relative to the current interest rate environment) but had an all-in-price less than par (100), one would conclude the bond had poorer credit quality (relatively speaking; assume there is no liquidity premium, etc.). This was then matched by a larger asset swap spread, i.e. as holder of the bond I am compensated more for its inferior credit quality.

But I don't see this mechanism in the net-proceeds asset swap, because the all-in-price is not built into the structure: in the par asset swap structure, at initiation you pay par for a bond whose current value is the all-in-price, while under the net-proceeds structure you pay the all-in-price, so the (AIP - 100) term present in the par asset swap structure is absent from the net-proceeds structure.

b.) Anyway, having used the net-proceeds structure for the ILBs, the graph of the ILB asset swap spread is completely different from the nominal asset swap spread: the ILB spreads are roughly double the size of the nominal spreads, and the shape of the graph (vs. maturity of the bonds) is erratic and wholly different from the shape of the nominal curve.

Now this may be due to the different credit risk profile of the ILB versus a plain-vanilla nominal bond (explained in the article linked above). But the article fails to cover how to account/compensate for this differing credit structure (so that we could compare the ILB spreads to the reference nominal bond's spreads). How would one account for this?

Does anyone have an idea how one should go about this, or more generally how to model the asset swap spread for ILBs?

Any help is greatly appreciated.

by Nick at October 30, 2014 12:30 PM


Failing to link c code to lapack: undefined reference

I am trying to use lapack functions from C.

Here is some test code, copied from this question

#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include "clapack.h"
#include "cblas.h"

void invertMatrix(float *a, unsigned int height){
  int info, ipiv[height];
  info = clapack_sgetrf(CblasColMajor, height, height, a, height, ipiv);
  info = clapack_sgetri(CblasColMajor, height, a, height, ipiv);
}

void displayMatrix(float *a, unsigned int height, unsigned int width)
{
  int i, j;
  for(i = 0; i < height; i++){
    for(j = 0; j < width; j++)
        printf("%1.3f ", a[height*j + i]);
  }
}

int main(int argc, char *argv[])
{
  int i;
  float a[9], b[9], c[9];
  for(i = 0; i < 9; i++){
      a[i] = 1.0f*rand()/RAND_MAX;
      b[i] = a[i];
  }
  displayMatrix(a, 3, 3);
  return 0;
}
I compile this with gcc:

gcc -o test test.c  \
    -lblas -llapack -lf2c

n.b.: I've tried those libraries in various orders; I've also tried other libs like latlas, lcblas, lgfortran, etc.

The error message is:

/tmp//cc8JMnRT.o: In function `invertMatrix':
test.c:(.text+0x94): undefined reference to `clapack_sgetrf'
test.c:(.text+0xb4): undefined reference to `clapack_sgetri'
collect2: error: ld returned 1 exit status

clapack.h is found and included (installed as part of atlas). clapack.h includes the offending functions --- so how can they not be found?

The symbols are actually in the library libalapack (found using strings). However, adding -lalapack to the gcc command seems to require adding -lcblas (lots of undefined cblas_* references). Installing cblas automatically uninstalls atlas, which removes clapack.h.

So, this feels like some kind of dependency hell.

I am on FreeBSD 10 amd64, all the relevant libraries seem to be installed and on the right paths.

Any help much appreciated.



by Ivan Uemlianin at October 30, 2014 12:27 PM


Planet Theory

Analysis of Boolean Functions book now available for free download

I’m happy to say that I’ve finally gotten things set up so that you can download a PDF of the book. The official web address for this is, or you can click “DOWNLOAD THE PDF” on the blog’s main page.

A small warning: On the download page, I am using a “Google form” to ask for your email address, to which the PDF will be sent. I promise not to send you any other emails; just the one containing the attached PDF. I would also like to warn that I think Google allows at most 100 emails per day, so there is some small chance that we might hit the limit today. If that happens, please try again tomorrow! Finally, the server hosting this website has been a bit flaky lately; I hope it will be fine in the future.

Many thanks to Cambridge University Press for agreeing (several years ago!) to allow the PDF to be freely available. Of course, I’m also very thankful to the heroes who sent in comments and bug-fixes to the blog draft of the book. Please feel free to continue doing so — I’m happy to keep making corrections.

Regarding the differences between the printed book and the electronic version: The printed book is “Version 1.00” and the PDF is “Version 1.01”. The main difference is that about 40 small typos/bugs have been fixed (nothing major; in almost all cases you can easily guess the correct thing once you notice the mistake). I would also like to add that the numbering of the theorems/definitions/exercises/etc. is identical between the printed book and the PDF, so if you would like to cite something, it doesn’t matter which version you cite. The page numbering is not the same, though, so please try not to cite things by page number.

Once more, thanks again to all the readers of the blog; I hope you enjoy the book!

Added later: We hit 100 emails; however, it appears that the downloads are still working. But in any case, if you have trouble, please try again tomorrow.

by Ryan O'Donnell at October 30, 2014 12:09 PM


Create scala.xml.Elem with specific childs

I'd like to create a new scala.xml.Elem based on a node with a specific list of childs. This would be the return case in a recursive replace function.

val childs: Seq[Node] = update(node.child)
new Elem(node.prefix, node.label, node.attributes, node.scope, true, childs)

This construct gives compiler error:

Error:(50, 11) overloaded method constructor Elem with alternatives:
  (prefix: String,label: String,attributes: scala.xml.MetaData,scope:  scala.xml.NamespaceBinding,child: scala.xml.Node*)scala.xml.Elem <and>
  (prefix: String,label: String,attributes1: scala.xml.MetaData,scope:  scala.xml.NamespaceBinding,minimizeEmpty: Boolean,child: scala.xml.Node*)scala.xml.Elem
  cannot be applied to (String, String, scala.xml.MetaData, scala.xml.NamespaceBinding, Boolean, Seq[scala.xml.Node])
      new Elem(node.prefix, node.label, node.attributes, node.scope, true, childs)

The problem seems related to vararg handling, and I can't understand why I'm getting this error. Any ideas?

Update: I was able to work around the problem with the following ugly construct:

val childs: Seq[Node] = update(node.child)
Elem(node.prefix, node.label, node.attributes, node.scope, true)
  .copy(node.prefix, node.label, node.attributes, node.scope, true, childs)

First create the elem without childs, then copy and add the childs. The copy method definition is without varargs.
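For reference, the standard way to pass a Seq into a varargs slot in Scala is the `: _*` ascription; here is a minimal sketch with a stand-in method (`mk` is my own name, not part of scala.xml):

```scala
// mk mimics the Elem-style overload (..., minimizeEmpty: Boolean, child: Int*);
// a Seq must be expanded into the vararg slot with `: _*`
def mk(label: String, minimizeEmpty: Boolean, child: Int*): String =
  s"$label(${child.mkString(",")})"

val kids = Seq(1, 2, 3)
// mk("e", true, kids)           // would not compile: Seq[Int] is not Int*
val ok = mk("e", true, kids: _*) // expands the Seq element-by-element
```

If that expansion is the issue, `new Elem(node.prefix, node.label, node.attributes, node.scope, true, childs: _*)` should select the six-argument overload directly, without the copy workaround.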

by ZsJoska at October 30, 2014 12:07 PM

Using clojure.core/Extend with Prismatic schema I get different behavior of s/protocol in s/validate and in s/with-fn-validation

I'm trying to validate the protocol of an instance of defrecord that I generate dynamically using clojure.core/extend

Below You can see that satisfies returns true and (s/validate (s/protocol ...)) doesn't throw exception, but if I run s/with-fn-validation I get a "(not (satisfies? protocol ..... " schema exception although inside this body I keep getting the same true result for (satisfies? protocol x)

(ns wrapper.issue
  (:require [schema.core :as s]))

(defprotocol Welcome
  (greetings [e])
  (say_bye [e a b]))

(s/defn greetings+ :- s/Str
  [component :- (s/protocol Welcome)]
  (greetings component))

(defrecord Example []
  Welcome
  (greetings [this] "my example greeting!")
  (say_bye [this a b] (str "saying good bye from " a " to " b)))

(defrecord MoreSimpleWrapper [e])

(extend MoreSimpleWrapper
  Welcome
  {:greetings (fn [this]
                (str "wrapping!! " (greetings (:e this))))
   :say_bye (fn [this a b]
              (str "good bye !"))})

(println (satisfies? Welcome (MoreSimpleWrapper. (Example.))))
(println (s/validate (s/protocol Welcome) (MoreSimpleWrapper. (Example.))))
;;=>#wrapper.issue.MoreSimpleWrapper{:e #wrapper.issue.Example{}}

(s/with-fn-validation
  (println (satisfies? Welcome (MoreSimpleWrapper. (Example.))))
  (println (s/validate (s/protocol Welcome) (MoreSimpleWrapper. (Example.))))
  ;;=>#wrapper.issue.MoreSimpleWrapper{:e #wrapper.issue.Example{}}

  (greetings+ (MoreSimpleWrapper. (Example.))))
;;=>CompilerException clojure.lang.ExceptionInfo: Input to greetings+ does not match schema: [(named (not (satisfies? Welcome a-wrapper.issue.MoreSimpleWrapper)) component)] {:schema [#schema.core.One{:schema (protocol Welcome), :optional? false, :name component}], :value [#wrapper.issue.MoreSimpleWrapper{:e #wrapper.issue.Example{}}], :error [(named (not (satisfies? Welcome a-wrapper.issue.MoreSimpleWrapper)) component)]}, compiling:(/Users/tangrammer/git/olney/wrapper/src/wrapper/issue.clj:39:69)

Here also a gist with the code

by tangrammer at October 30, 2014 12:04 PM



scala - calling methods of objects using for, foreach loops

I have this code:

val br = new ListBuffer[Piece]

for(i <- 10 to 550 by 65) {
  br += new Brick(Point(x = i, y = 20, w = widthB, h = heighB))
}

And there is a draw(g, color) method on the Piece class. Now I would like to know how I can call that draw method for each Piece in the ListBuffer. I am trying it this way but I don't understand why it isn't working:

br.foreach(x => draw(g, orange))

Thanks for any suggestions; what am I doing wrong?
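Since draw is a method on Piece, it has to be invoked on the element bound by the loop rather than as a free function. A sketch with simplified stand-in types (Strings instead of a real Graphics object and color):

```scala
import scala.collection.mutable.ListBuffer

// Simplified stand-ins for the types in the question
case class Point(x: Int, y: Int, w: Int, h: Int)
trait Piece { def draw(g: String, color: String): String }
class Brick(p: Point) extends Piece {
  def draw(g: String, color: String) = s"brick at ${p.x},${p.y} in $color"
}

val br = new ListBuffer[Piece]
for (i <- 10 to 75 by 65) br += new Brick(Point(i, 20, 60, 20))

// Call draw on each element, not as a bare function:
val drawn = br.map(piece => piece.draw("g", "orange"))
```

With a real side-effecting draw the same shape works with foreach: `br.foreach(_.draw(g, orange))`.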

by Val at October 30, 2014 12:01 PM



Proguard and Scala default arguments

I have an Android Scala app that uses SBT + ProGuard for building.

In a library project I have this:


class Columna[T] { ... }

class TablaBase {
  lazy val columnas: List[Columna[_]] = ....
}

trait Soporte {
  this: TablaBase =>

  def fabricaSoporte(w: Writer, cols: List[Columna[_]] = columnas) {
    ...
  }
}
In my app code, I have this:


object sgiein extends TablaBase with Soporte { .... }    

and when building my project, I get these cryptic errors:

Warning: can't find referenced method 'void es$fcc$bibl$bd$sincr$TablaBaseSincronizada$_setter_$cols_$eq(scala.collection.immutable.List)' in program class$
Warning: can't find referenced method 'scala.collection.immutable.List cols()' in program class$

     Your input classes appear to be inconsistent.
     You may need to recompile the code.

The problem is related to the default value of the argument cols. If I remove that argument, everything builds OK.

I've tried to change the ProGuard options to these with no luck:

-keepclassmembers class * {
    ** *(**);
}
-keepclassmembers class * {
    ** *(**);
}
I don't understand why I'm having this problem.

by david.perez at October 30, 2014 11:59 AM

Either for-comprehension different behavior in 2.9 and 2.10

Scala 2.10 seems to have updated the for comprehension for Either. In 2.10:

scala> val a = Right(5)
a: scala.util.Right[Nothing,Int] = Right(5)

scala> for {aa: Int <- a.right} yield {aa}
<console>:9: error: type mismatch;
 found   : Int => Int
 required: scala.util.Either[Nothing,Int] => ?
          for {aa: Int <- a.right} yield {aa}

In 2.9.3, the above is okay.

scala> val a = Right(5)
a: Right[Nothing,Int] = Right(5)

scala> for {aa: Int <- a.right} yield {aa}
res0: Product with Serializable with Either[Nothing,Int] = Right(5)

It is very easy to fix by just removing the type ascription for aa in 2.10. But I am wondering why the behavior changed like this between 2.9 and 2.10.
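For illustration, both workarounds in one sketch; the typed pattern is what triggers the withFilter desugaring in 2.10, so dropping the ascription avoids it:

```scala
val a: Either[String, Int] = Right(5)

// The typed pattern `aa: Int <- ...` desugars through withFilter in 2.10,
// which Either's projections don't provide; an untyped binding is fine:
val b = for (aa <- a.right) yield aa + 1

// or bypass the for-comprehension entirely:
val c = a.right.map(_ + 1)
```

Since Scala 2.12, Either is right-biased, so `for (aa <- a) yield aa + 1` works with no projection at all.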

by Jiezhen Yi at October 30, 2014 11:45 AM

Planet Emacsen

Irreal: Compiling aspell with OS X Yosemite

While I was setting up my new machine, I had to rebuild aspell. The last time I did that it built without any problems. This time, despite the fact that it was the same version as before, there were several fatal errors. I asked DuckDuckGo what it knew about the matter and it referred me to this stackoverflow question.

The answer is correct but it's not very clear where you have to apply the fixes. For the record, the error is in interfaces/cc/aspell.h, in the section marked “errors” that starts on line 237. Just wrap the entire section in #ifndef __cplusplus … #endif, as lotsoffreetime suggests in the stackoverflow post.

by jcs at October 30, 2014 11:42 AM


How protect spray application from overloading?

The algorithm is clear:

  1. The client sends a request to my spray application.
  2. Spray receives the request and checks its current load.
  3. If the load is high, spray returns 503; otherwise it starts processing the request.

How do I measure spray's current load?

Also, as I understand it, spray uses akka internally, which can be scaled out by adding additional nodes. In that case I should somehow manage the nodes' load. How can I do that?
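One generic way to implement step 3 is to track in-flight requests yourself. This is a sketch with invented names (LoadGate is not a spray API); in a real spray route the reject branch would complete with a 503 status:

```scala
import java.util.concurrent.atomic.AtomicInteger

// Count in-flight requests and shed load above a threshold.
class LoadGate(maxInFlight: Int) {
  private val inFlight = new AtomicInteger(0)
  def handle[A](reject: => A)(work: => A): A =
    if (inFlight.incrementAndGet() > maxInFlight) {
      inFlight.decrementAndGet()
      reject                                  // e.g. complete with 503
    } else {
      try work finally inFlight.decrementAndGet()
    }
}

val gate = new LoadGate(maxInFlight = 100)
val response = gate.handle("503 Service Unavailable")("200 OK")
```

The same counter can feed a health metric if you prefer to shed load at the load balancer instead.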

by Cherry at October 30, 2014 11:33 AM



How can I convert BSON (Binary JSON) to human readable data using Clojure?

I was wondering if anybody has a solution or knows of any resources that would allow me to convert BSON into human readable data for use with MapReduce using Cascalog?

by Geem7n at October 30, 2014 11:14 AM

expression evaluator in scala (with maybe placeholders?)

I am reading something like this from my configuration file :

metric1.critical = "<2000 || >20000"
metric1.okay = "=1"
metric1.warning = "<=3000"
metric2.okay = ">0.9 && < 1.1 "
metric3.warning ="( >0.9 && <1.5) || (<500 &&>200)"

and I have a

metric1.value =  //have some value

My aim is to basically evaluate

    if(metric1.value<2000 || metric1.value > 20000)
    else if(metric1.value=1)
    //and so on

I am not really good with regex, so I am going to try not to use it. I am coding in Scala and wanted to know if any existing library can help with this. Maybe I need to put placeholders to fill in the blanks and then evaluate the expression? But how do I evaluate the expression most efficiently and with the least overhead?

EDIT: In Java we have expression evaluator libraries; I was hoping I could find something similar for my code. Maybe I can add placeholders (like "?") in the config file to substitute my metric1.value (read: variables) and then use an evaluator? Or can someone suggest a good regex for this? Thanks in advance!
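Absent a library, a small hand-rolled evaluator covers the simple cases. This is my own sketch (the names evalCmp and evalRule are invented), and it does not handle the parenthesised metric3 form:

```scala
// Evaluate one comparison like "<=3000" or "=1" against a value
def evalCmp(value: Double, cmp: String): Boolean = {
  val t = cmp.trim
  if (t.startsWith("<=")) value <= t.drop(2).trim.toDouble
  else if (t.startsWith(">=")) value >= t.drop(2).trim.toDouble
  else if (t.startsWith("<")) value < t.drop(1).trim.toDouble
  else if (t.startsWith(">")) value > t.drop(1).trim.toDouble
  else if (t.startsWith("=")) value == t.drop(1).trim.toDouble
  else sys.error(s"unsupported comparison: $cmp")
}

// || binds looser than &&: true if any OR-branch has all its AND-terms hold
def evalRule(value: Double, rule: String): Boolean =
  rule.split("\\|\\|").exists(branch => branch.split("&&").forall(evalCmp(value, _)))
```

A parser-combinator grammar would be the natural next step if parentheses are required.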

by joanOfArc at October 30, 2014 11:02 AM

openjdk 1.8 package for centos 6.5

I want to move our production setup to openjdk 1.8 soon. Currently we're running openjdk 1.7u55 on centos 6.5.

The trouble is that I can't seem to get a straight answer out of google on where to find a yum repository with 1.8. Has a usable rpm been released somewhere? If so, where? If not, is there a rough ETA when this might happen (e.g. centos 7 or epel?). I could probably wait a few weeks but not much longer.

I was able to find some fedora packages at least:

So, this suggests people are working on this at least, but I have no idea if these packages are stable on centos (or work at all).

For clarity, I know it is early days with jdk 1.8 and am well aware of the tradeoffs. I'm not looking for build instructions or instruction on how to download Oracle Java from Oracle, which with their lack of a yum repo and license click through is annoying.

Update: It seems EPEL now has openjdk 1.8.0_25, so this is no longer an issue. Run sudo yum install java-1.8.0-openjdk-devel if you need the JDK instead of the JRE.

by Jilles van Gurp at October 30, 2014 10:50 AM

Guava Function in interfaces

I am going through an old code base of Java 6 and I see this in one of the interfaces

public static Function<Model, Map<? extends Class<? extends Feature>, Map<String, String>>> getRequiredFeatures =
        new Function<Model, Map<? extends Class<? extends Feature>, Map<String, String>>>() {
            public Map<? extends Class<? extends Feature>, Map<String, String>> apply(final Model input) {
                return input.getRequiredFeatures();
            }
        };
Besides the lots of generic types, what I didn't understand is what exactly is being done here. Aren't we only allowed to declare method signatures in interfaces? So how does this work? I also see a lot of this in the code, which I also don't understand:

public static Function<Model, Set<Model>> unwrap =
        function(FuncitoGuava.<Model, Set<Model>>functionFor(callsTo(Model.class).unwrap()));

This might be a noob question, as I am pretty new to FP and Guava in general, so please go easy on it. Thanks.

by Abhiroop Sarkar at October 30, 2014 10:40 AM


A naive proof strategy about the P versus NP [on hold]

Proving that $P=NP$ is in $NP$, and proving that $P\neq NP$ would be in $P$.

Is this formulation acceptable?

Side-precision: The proof of $P=NP$ will be so tough to elaborate that it should lie in $NP$. It follows, and is easy to derive, that the proof of $P\neq NP$ will not cost a full amount of computational time, so that it would lie in $P$.

by user23077 at October 30, 2014 10:32 AM


Type-safe transducers in Clojure. And Scala. And Haskell.

You may obviously not agree with the entirety of the content in that post; however, I find it interesting that somebody wrote this kind of post, so I thought I'd share it with you. Also, a request: could we have a clojure tag in ?


by irrequietus at October 30, 2014 10:32 AM


Scala Data Type Mismatch Error

object GPACalc {
  def main(args: Array[String]) = {
    val listOfGrades = List(3.00, 2.00, 1.75, 1.25)
  }

  def CumAve(f: List[Double] => Double, g: List[Double] => Double)(xs: List[Double]) = f(g(xs))

  def getGPA(xs: List[Double]) = sumList(xs) / xs.length

  def sumList(ys: List[Double]): Double = {
    if (ys.isEmpty) 0
    else ys.head + sumList(ys.tail)
  }
}


So I'm getting this type mismatch error on Line 8, and I can't seem to pinpoint what's wrong with it.
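The mismatch is in CumAve: g(xs) produces a Double, but f is declared to take a List[Double], so f(g(xs)) cannot type-check. A sketch of a compiling version (GPAFix and cumAve are my own names):

```scala
object GPAFix {
  def sumList(ys: List[Double]): Double =
    if (ys.isEmpty) 0 else ys.head + sumList(ys.tail)

  def getGPA(xs: List[Double]): Double = sumList(xs) / xs.length

  // f now consumes the Double that g produces, so f(g(xs)) type-checks
  def cumAve(f: Double => Double, g: List[Double] => Double)(xs: List[Double]): Double =
    f(g(xs))
}

val gpa = GPAFix.cumAve(identity, GPAFix.getGPA)(List(3.00, 2.00, 1.75, 1.25))
```

If f really must take a List[Double], then the composition has to be f(xs) fed into g, or the two functions swapped, depending on the intent.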

by Renz at October 30, 2014 10:23 AM


Option platforms providing eurex products

I'm looking for an options platform providing Eurex products such as the EuroStoxx 50. Can you advise me on some platforms? Thank you in advance for your answer. Julien

by julien at October 30, 2014 10:20 AM


Planet Clojure

Emacs: Down The Rabbit Hole

So I wrote Welcome to The Dark Side: Switching to Emacs in response to a tweet, but as any of my co-workers will attest, it doesn’t take much to get me started on my Emacs workflow.

I feel like a religious convert… I miss those simple, unadorned Vim services but I’m floored by the majesty of the stained glass and altar dressings and ritual of the Church of Emacs.

So before the jump, in the spirit of “I’ll show you mine if you show me yours,” my .emacs.d.

An Unexpected Journey

I lived in my happy little Vim hobbit hole, smoking my pipe and enjoying my brandy. It was not a dirty hole, or a sandy hole, but a hobbit hole, which means comfort.

One day, a wizard visited me.

And that’s when things began to get weird…

Okay, so maybe I didn’t receive a visit from the revenant spirit of John McCarthy, ghost of programming past, present and future. Or maybe I did.

Maybe Paul Graham just convinced me I was coding in Blub, for whatever value of Blub I happened to be using.

See, the thing about Blub is it’s a mutable value. When you’re using C++ and Java comes along, you realize C++ was actually Blub. When you’re using Perl for your day-to-day and discover Python, and then Ruby, you realize that not only was Python Blub, but Perl was an even Blubbier Blub.

Ruby… oh, Ruby. I still love Ruby. But then something happened.

I need to backpedal a bit.

There’s using a language, and then there’s building something in it. I’d played with Scheme (SICP is wonderful), and even Common Lisp, and I knew enough to appreciate the Lisp-nature of Ruby which, when combined with its Smalltalk-nature, I thought made for hte perfect productive language.

But see, I was building things in Ruby while I was playing with Lisp.

Along comes Clojure.

I was working in a pretty isolated programming role that granted me a lot of de facto autonomy. So when I got a request for a new service, I thought “why not Clojure?”

We’re in late 2012 here, so bear with me.

My first Clojure project ran like a champ, was hailed as an unqualified success. Eventually I even blogged about a piece of that project that handled datetimes.

Fast-forward to the present, I’ve written Clojure in Sublime Text, Atom, mostly Vim with the help of some awesome plugins from Tim Pope.

Like I mentioned before, I’ve had a religious hatred for Emacs since the mid-1990s when I entered the *nix world and got involved in USENET.

The war is far from over…

…but, I digress.

I started the Baltimore Clojure Meetup and met more Emacs users than I had in one place in a long time. Again, I dismissed Emacs.

That is, until I found LightTable completely b0rked again and threw up my hands.

Perhaps I shouldn’t have eaten my hands to begin with… sorry, equivocation humor. Can’t resist.

Welcome to Emacs

So yeah, I went over my starter packages in the earlier post, but I didn’t talk about the full experience of discovery I underwent when I fully committed to emacs.

Sure, there’s the whole cider-mode and cider-jack-in and cider-nrepl and even cider-scratch that make LightTable’s inline evaluation modes look like child’s play (no offense to Chris Granger, LightTable is beautiful, I love it, but… y’know, Emacs).

So I did those things, started with Prelude, added all the Clojure fun I could find, and got to work.

I also subscribed to /r/emacs, and did a little reading on the Emacs Wiki.

Have you ever been comfortably reading (or coding) under a tree, and you see a white rabbit in a waistcoat with a pocket-watch run by complaining he’s late?

Thus such adventures begin.


As I fell to the bottom (or so I thought) of the rabbit-hole, I found a bottle of cider labeled Drink Me, and so I drank the cider. Suddenly, I could eval Clojure inline, jump to docstrings, jump to source for a fn, and it was wonderful.

The last time I tried Emacs, I always joked about how I was using Emacs but always edited my .emacs config with Vim.

“Not this time,” I thought, and used Projectile to manage my .emacs.d and edited my user.el in Emacs. Oh, it was better! Then, thought I, I should put my .emacs.d in source control (actually, it was demanded).


But then I realized I was doing the ⌘-Tab to iTerm to run git ci -a (I pity the fool that doesn't alias common git commands) in… wait for it… $EDITOR=/usr/bin/vim.

That’s when I found a bit of fairy cake called magit, and I ate a bit of that and my Git workflow was inside of Emacs. Now it was a simple M-x magit-status to view my working tree state, where I could hit s to stage files for commit, and C-c C-c to commit changes, and P P to push.

Oh, it’s beautiful.

Curiouser and Curiouser

Well, if Emacs can handle my Git workflow, what can’t it do, I wondered?

I went a bit mad playing with multiple buffer and frame layouts; on one occasion I opened a shell inside an Emacs buffer and launched the command-line version of Emacs in a shell inside the windowed version of Emacs.

Recursive rabbit holes.

When you’re running the Cocoa-nested version of Emacs (not Aquamacs, fuck that noise, but just GNU Emacs packaged as a .app), you get some suggestions from the menus. Gnus for USENET or email, various games, a calendar…


That’s when I discovered Org-Mode.

Org-Mode FTW

Org Mode is an Emacs major mode that lets you organize your life. All of it. I’m not even going into detail here, it’s a deep, deep well. You can use it for a TODO list, sync it with your phone, use it to write your Octopress blog.

(Confession: This blog is powered by octopress, and although it’s now written in Emacs, I’ve not gone full crazy and started composing it with Org-Mode)

Twittering-Mode WTF

That’s when I started going down the tunnel of “well, what else can it do?”

And I discovered twittering-mode.

A quick M-x package-install RET twittering-mode puts a Twitter client in your text editor. Like you always needed. M-x twit will jump you right into your Twitter feed, i will enable user icons (yes, user avatars right in goddamn Emacs), and u will jump you to a buffer where you can compose a Tweet and hit C-c C-c to send it.

Playing Games

I’d be remiss if I didn’t mention that M-x package-install RET 2048-mode will install a game of 2048 in Emacs. Because that’s really fucking important, you know?


For good reason, Emacs comes standard with an AI psychotherapist named Eliza.

A quick M-x doctor and you’re in therapy.

Which you’ll probably need.

…and Much, Much More

I’ve barely scratched the surface, but I feel like this post is long enough. There’s so much down here, down the Emacs rabbit hole, that it will probably take me weeks to even catch up to where I am right now; what I’ve described so far is my first few days with this operating system text editor.

But it’s a fun ride.


Sorry for the Tolkien digression when my dominant allusion was Alice in Wonderland… Emacs is a weird place.

by Jason Lewis at October 30, 2014 10:05 AM


ZeroMQ / 0mq or nanomsg bindings to Kafka?

In Fred Georges talk about microservice architectures, he mentions using Kafka as a high speed bus (he refers to as the rapids) and connecting multiple 0mq instances (refered to as rivers) to it. A slide of this can be seen here.

Can anyone share how this binding might be best implemented?

Also keen to hear how this might be implemented using nanomsg instead of 0mq.

by user1074891 at October 30, 2014 10:00 AM

Portland Pattern Repository


I can't get 4 space indentation to work at all

I'm currently in the middle of a C++ project where the code I've been given to work with is indented 4 spaces. The problem is that Emacs' default indentation is 2 spaces, so every attempt to indent a line fails to align properly.

I have tried multiple solutions

* I've gone into the configuration menu and changed indentation there (and saved it), but Emacs ignores it (and resets it all upon restart too, which is weird).

* I've tried following multiple guides on how to set up indentation that I've found online, but none of them work.

The current contents of my .emacs file is: "(setq-default indent-tabs-mode nil) (setq-default tab-width 4) (setq indent-line-function 'insert-tab)"

Would be really grateful if this could be solved somehow. I've probably spent over 2 hours on this issue alone, including having several other people look at the problem too, and they had absolutely no idea why it wasn't working correctly.

It seems that whatever is put in the .emacs file is actually ignored by Emacs, but I don't know how this is even possible. If I make changes in Emacs settings from within the program, those changes affect the .emacs file, so it's not that it doesn't know where the file is.

submitted by theKGS

October 30, 2014 09:58 AM


Expected empirical entropy

I'm thinking about some properties of the empirical entropy for binary strings of length $n$, and the following question came my way:

$\underbrace{\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize nH_{0}(w)}_{\large\#}\;\overset{?}{=}\;n-\varepsilon_{n}\;\;\;$

with $\;\;\lim\limits_{n\rightarrow\infty}\varepsilon_{n}=c\;\;\;$ and $\;\;\;\forall n:\;\varepsilon_{n}>0$

where $c$ is a constant.

Is that equation true? For which function $\varepsilon_{n}$ respectively which constant $c$?

$ $

$n=2\;\;\;\;\;\;\;\rightarrow\;\#=1 $
$n=3\;\;\;\;\;\;\;\rightarrow\;\#\approx 2.066 $
$n=6\;\;\;\;\;\;\;\rightarrow\;\#\approx 5.189 $
$n=100\;\;\;\rightarrow\;\#\approx 99.275 $
$n=5000\;\rightarrow\;\#\approx 4999.278580 $
$n=6000\;\rightarrow\;\#\approx 5999.278592 $

$ $


$ $
$H_{0}(w)$ is the zeroth-order empircal entropy for strings over $\Sigma=\left\{0,1\right\}$:

  • $H_{0}(w)=\frac{|w|_{0}}{n}\log\frac{n}{|w|_{0}}+\frac{n-|w|_{0}}{n}\log\frac{n}{n-|w|_{0}}$

where $|w|_{0}$ is the number of occurrences of $0$ in $w\in\Sigma^{n}$.

The term $nH_{0}(w)$ corresponds to the Shannon-entropy of the empirical distribution of binary words with respect to the number of occurrences of $0$ and $1$, respectively, in $w\in\Sigma^{n}$.

More precise:
Let the words in $\left\{0,1\right\}^{n}$ be possible outcomes of a Bernoulli process. If the probability of $0$ is equal to the relative frequency of $0$ in a word $w\in\left\{0,1\right\}^{n}$, then the Shannon-entropy of this Bernoulli process is equal to $nH_{0}(w)$.

At this point, my question should be more reasonable since the first term normalizes the Shannon-entropies for all empirical distributions of words $w\in\left\{0,1\right\}^{n}$.
Intuitively I thought about getting something close to the Shannon-entropy of the uniform distribution of $\left\{0,1\right\}^{n}$, which is $n$.
By computing and observing some values I've got the conjecture above, but I'm not able to prove it or to get the exact term $\varepsilon_{n}$.

It is easy to get the following equalities:

$\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize nH_{0}(w)\;\;=\large\frac{1}{2^{n}}\normalsize\sum\limits_{w\in\left\{0,1\right\}^{n}}\normalsize |w|_{0}\log\frac{n}{|w|_{0}}+(n-|w|_{0})\log\frac{n}{n-|w|_{0}}$

$=\large\frac{1}{2^{n}}\normalsize\sum\limits_{k=1}^{n-1}$ $n\choose k$ $\left(k\log\frac{n}{k}+(n-k)\log\frac{n}{n-k}\right)$

and it is possible to apply some logarithmic identities, but I'm still stuck at a dead point.

(the words $0^{n}$ and $1^{n}$ are ignored, because the Shannon-entropy of their empirical distributions is zero)
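Not an answer, but a heuristic consistent with the numbers above: a second-order Taylor expansion of $H_{0}$ around $\frac{1}{2}$ (using $H_{0}''(\frac{1}{2})=-\frac{4}{\ln 2}$), together with $\mathbb{E}[(\hat{p}-\frac{1}{2})^{2}]=\frac{1}{4n}$ for the empirical frequency $\hat{p}=|w|_{0}/n$, suggests $c=\frac{1}{2\ln 2}\approx 0.72135$, which matches $\varepsilon_{5000}\approx 0.72142$:

```latex
H_{0}\!\left(\tfrac{1}{2}+\delta\right)\approx 1-\frac{2}{\ln 2}\,\delta^{2}
\quad\Longrightarrow\quad
\frac{1}{2^{n}}\sum_{w\in\{0,1\}^{n}} nH_{0}(w)
\;\approx\; n\left(1-\frac{2}{\ln 2}\cdot\frac{1}{4n}\right)
\;=\; n-\frac{1}{2\ln 2}.
```

This is only heuristic (it ignores higher-order terms and the excluded endpoints), but it pins down the plausible constant.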

Any help is welcome.

by Danny at October 30, 2014 09:52 AM


How do I shutdown Hikari connection pool in clojure ring web app deployed to tomcat in elasticbeanstalk

I've got HikariCP running in my clojure ring app for connection pooling. The problem is I don't know of a good place to close the pool, so I'm not closing it: I allow the pool to die when the app does. It appears this is leaking connections whenever I redeploy my app to Elastic Beanstalk (which is using Tomcat), but I'm not totally sure. I'm wondering where (if anywhere) is a good place to put app shutdown code so I can explicitly close my connection pool. FYI, the current deployment process is to execute lein ring uberwar and deploy that war via the elasticbeanstalk UI.

by Brad at October 30, 2014 09:51 AM

Fred Wilson


A few days ago, I got an email from a reporter asking me this:

What is needed to help bring Bitcoin security and ease of use to mainstream Bitcoin users?

I was in a hurry, trying to get through my email, and wrote back this:

i think wider use of multisig would be a good thing

Multisig is a technology that was added to the Bitcoin protocol in 2011 and 2012. This article on Multisig by Vitalik Buterin is a good description of the technology. This is from Vitalik’s article:

In a traditional Bitcoin account, you have Bitcoin addresses, where each address has one associated private key that grants the keyholder full control over the funds. With multisignature addresses, you can have a Bitcoin address with three associated private keys, such that you need any two of them to spend the funds. 

In principle this is a lot like the check-writing policies that many of our portfolio companies have. For checks below some number, say $5000, one signature is fine. But for checks above that number, two signatures are required.

Multisigned transactions are more secure. I would like to see more Bitcoin based systems implement multisig, as I explained to the reporter.

Yesterday our portfolio company Coinbase announced that their Vault service supports multisig. I use a Coinbase Vault and like to think of it as my “savings account” and I use it in combination with my primary Coinbase wallet, which I like to think of as my “checking account.” In addition to a Multisig Vault being more secure, it also allows the user to store their own private keys, something that has not been possible with the Coinbase wallet service. You can increase the number of required signatures from two of three to three of five. The latter service is great for family vaults or group vaults.

The reaction to Multisig Vault has been very good, as evidenced by this Hacker News thread. I particularly liked this comment:

the mere fact that a major Bitcoin exchange is allowing users to hold their own private keys really puts a smile on my face today.

It is completely unheard of in the financial industry (and usually technically impossible before cryptocurrencies) to have a bank give away their “middle man” access of people’s money and empowering their customers with complete control over their finances.

This is what Bitcoin is all about. Giving us control back over our money and taking it away from the financial institutions. It is not a coincidence that Bitcoin was invented in the wake of the financial crisis in 2008. But you can’t take control back from the financial institutions without providing trust and security. And multisig is a big part of that.

by Fred Wilson at October 30, 2014 09:49 AM


Are shift-chains two-colorable?

For $A\subset [n]$ denote by $a_i$ the $i^{th}$ smallest element of $A$.

For two $k$-element sets, $A,B\subset [n]$, we say that $A\le B$ if $a_i\le b_i$ for every $i$.

A $k$-uniform hypergraph ${\mathcal H}\subset [n]$ is called a shift-chain if for any hyperedges, $A, B \in {\mathcal H}$, we have $A\le B$ or $B\le A$. (So a shift-chain has at most $k(n-k)+1$ hyperedges.)

We say that a hypergraph ${\mathcal H}$ is two-colorable (or that it has Property B) if we can color its vertices with two colors such that no hyperedge is monochromatic.

Is it true that shift-chains are two-colorable if $k$ is large enough?

Remarks. I first posted this problem on mathoverflow, but nobody commented on it.

The problem was investigated at the 1st Emlektabla Workshop with some partial results; see the booklet.

The question is motivated by decomposition of multiple coverings of the plane by translates of convex shapes, there are many open questions in this area. (For more, see my PhD thesis.)

For $k=2$ there is a trivial counterexample: (12),(13),(23).

A very magical counterexample was given for $k=3$ by Radoslav Fulek with a computer program:



If we allow the hypergraph to be the union of two shift-chains (with the same order), then there is a counterexample for any $k$.

Update. I have recently managed to show that a more restricted version of shift-chains is two-colorable in this preprint.

by domotorp at October 30, 2014 09:45 AM


How does SARSA handle episode termination

When applied to domains that are episodic and have a "final" state but no final action, like a game, how does SARSA update the Q-values?

e.g. A game agent would receive this series:


Based on the traditional definition, which updates the current Q-value using the future Q-value, shouldn't it be impossible to apply SARSA, since there's no future action to use when plugging in the $(s_{t}, a_{t}, r_{t}, s_{t+1}, a_{t+1})$ values?
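For what it's worth, the conventional treatment (e.g. in Sutton and Barto's formulation) is to define $Q(s,\cdot)=0$ for every terminal state $s$, so no final action is needed. The general update

$Q(s_{t},a_{t}) \leftarrow Q(s_{t},a_{t}) + \alpha\left[r_{t+1} + \gamma Q(s_{t+1},a_{t+1}) - Q(s_{t},a_{t})\right]$

collapses, when $s_{t+1}$ is terminal, to

$Q(s_{t},a_{t}) \leftarrow Q(s_{t},a_{t}) + \alpha\left[r_{t+1} - Q(s_{t},a_{t})\right]$

i.e. the last transition is updated toward the final reward alone.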

by Cerin at October 30, 2014 09:44 AM

Ambiguous Grammar to Unambiguous Grammar

I'm wondering if somebody can please help me to understand how I go about converting this grammar to an unambiguous grammar. If you can point me in the right direction, I would greatly appreciate it.

S -> AB | aaB

A -> a | Aa

B -> b
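A hedged observation: this grammar generates $a^{+}b$, and the only ambiguity comes from S -> aaB overlapping with S -> AB (the string aab has two distinct derivations). Dropping the redundant production leaves the same language with a unique derivation for every string:

S -> AB

A -> a | Aa

B -> b

Here A derives $a^{n}$ by a single left-recursive chain, so each $a^{n}b$ has exactly one parse.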

by Rusty at October 30, 2014 09:41 AM


How to set a default list value if a system variable is not present in Typesafe configuration?

Here is the Typesafe Config documentation.

According to it, it is possible to override a property like this:

akka {
    loglevel = DEBUG
    loglevel = ${?LOG_LEVEL}
}

So in that case loglevel will be DEBUG, or the value of the LOG_LEVEL system variable.

What about list configuration properties?

akka {
    someListProperty = ["oneValue"]
    someListProperty = [${?LOG_LEVEL}]
}

In that case, if the system variable is not present, someListProperty will be overridden with an empty list.

How can I set a default list value if the system variable is not present?

by Cherry at October 30, 2014 09:41 AM


Text book or distilled guide to market making?

Are there any practical articles, blogs or books that describe common practices in market making and how to calculate and use common measures?

The majority of the information I found is research papers. For example, the links in the following question: academic papers about market making

Are there any articles on how the research papers above are used in practice?

by Brandon at October 30, 2014 09:30 AM


How to use context bounds with type constraints in Scala?

I have some function

def Bar[F : TypeTag](fList: List[String]): F = {
  typeOf[F] match {
    case t if t =:= typeOf[FooA] => returnsomething.asInstanceOf[F]
    case t if t =:= typeOf[FooB] => returnanother.asInstanceOf[F]
  }
}

Then I want to use type constraints so that Bar accepts only subtypes of Foo, but I can't use this construction:

def getFilter[F <: Foo : TypeTag](fList: List[String]): F = {
  typeOf[F] match {
    case t if t =:= typeOf[OpsosFilter] => OpsosFilter(loadFilter[String](fList)).asInstanceOf[F]
    case t if t =:= typeOf[OrganizationFilter] => OrganizationFilter(loadFilter[Long](fList)).asInstanceOf[F]
  }
}

How can I resolve my problem?
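For what it's worth, an upper bound and a context bound can be combined in a single type-parameter clause as F <: Foo : TypeTag; the context bound simply desugars to an extra implicit TypeTag[F] parameter. A minimal sketch with stand-in FooA/FooB types (not the original OpsosFilter/OrganizationFilter):

```scala
import scala.reflect.runtime.universe._

sealed trait Foo
case class FooA() extends Foo
case class FooB() extends Foo

// "F <: Foo : TypeTag" = upper bound + implicit TypeTag[F] evidence;
// the two notations coexist without conflict.
def bar[F <: Foo : TypeTag](fList: List[String]): F =
  typeOf[F] match {
    case t if t =:= typeOf[FooA] => FooA().asInstanceOf[F]
    case t if t =:= typeOf[FooB] => FooB().asInstanceOf[F]
  }
```

Calling bar[FooA](Nil) then dispatches on the reified type as in the original code; calling it with a type outside the Foo hierarchy is rejected at compile time.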

by mechanikos at October 30, 2014 09:30 AM


Option Prices under the Heston Stochastic Volatility Model

I was wondering if anyone has come across a more straightforward derivation of the semi-closed form solution for the price of a european call under the Heston model than the one proposed by Heston (1993) ?

by dimebucker91 at October 30, 2014 09:20 AM


Using regex to access values from a map in keys

val m = Map("a"->2,"ab"->3,"c"->4)

scala> m.get("a");

scala> println(res.get)

scala> m.get(/a\.*/)
// or something similar.

Can I get a list of all key-value pairs where the key contains "a", without iterating over the entire map myself, by doing something as simple as specifying a regex as the key?
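One caveat, hedged: a Map is hashed on exact keys, so any pattern-based lookup must scan every entry; but the scan can be a one-liner. A sketch:

```scala
val m = Map("a" -> 2, "ab" -> 3, "c" -> 4)

// filter walks every entry; a regex key lookup cannot avoid that scan
val withA = m.filter { case (k, _) => k.contains("a") }
// withA == Map("a" -> 2, "ab" -> 3)

// or with a real regex
val re = "a.*".r
val matched = m.filter { case (k, _) => re.pattern.matcher(k).matches }
// matched == Map("a" -> 2, "ab" -> 3)
```

If this lookup is frequent, one design alternative is to precompute a secondary index (e.g. a Map from substring to matching keys) so each query avoids the full scan.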

Thanks in advance!

by joanOfArc at October 30, 2014 09:08 AM

Recursive Reassignment of Variables in Clojure

I'm trying to get more acquainted with Clojure, so I decided to do my Runge-Kutta integrator project in it. However, I'm having problems working with the immutable nature of the let statement. I want to evaluate 8 variables in each iteration of the loop and use them to recurse through it until my loop is finished.

The way I understand it, since my recur is inside the let's scope, my k's and l's won't be overwritten with each recursion. I'm looking for a more idiomatic way to recurse through my integrator.

(loop [psin 0  Xin 1 xn dx] ;initial values
  (if (> xn 1)
    (let [k1 (evalg  xn psin) ;define 4 variables to calculate next step of Xi, psi
          l1 (evalf  xn Xin)  ;evalf and evalg evaluate the functions as per Runge Kutta
          k2 (evalg (+ (* 0.5 dx) xn) (+ (* 0.5 l1) psin))
          l2 (evalf (+ (* 0.5 dx) xn) (+ (* 0.5 k1) Xin))
          k3 (evalg (+ (* 0.5 dx) xn) (+ (* 0.5 l2) psin))
          l3 (evalf (+ (* 0.5 dx) xn) (+ (* 0.5 k2) Xin))
          k4 (evalg (+ dx xn) (+ l3 psin))
          l4 (evalf (+ dx xn) (+ k3 Xin))]
        (let [Xinew (+ Xin (* (/ dx 6) (+ k1 k4 (* 2 k3) (* 2 k2))) )
              psinew (+ psin (* (/ dx 6) (+ l1 l4 (* 2 l2) (* 2 l3) )))]
          (println k1)
          (recur psinew Xinew (+ dx xn)))))))

Many thanks! Looking forward to getting more acquainted with Clojure :)

by apmechev at October 30, 2014 09:02 AM

Generic Spray-Client

I'm trying to create a generic HTTP client in Scala using spray. Here is the class definition:

object HttpClient extends HttpClient

class HttpClient {

  implicit val system = ActorSystem("api-spray-client")
  import system.dispatcher
  val log = Logging(system, getClass)

  def httpSaveGeneric[T1:Marshaller,T2:Unmarshaller](uri: String, model: T1, username: String, password: String): Future[T2] = {
    val pipeline: HttpRequest => Future[T2] = logRequest(log) ~> sendReceive ~> logResponse(log) ~> unmarshal[T2]
    pipeline(Post(uri, model))
  }

  val genericResult = httpSaveGeneric[Space,Either[Failure,Success]](
    "http://", Space("123", IdName("456", "parent"), "my name", "short_name", Updated("", 0)), "user", "password")


An object utils.AllJsonFormats has the following declaration. It contains all the model formats. The same class is used on the "other end" i.e. I also wrote the API and used the same formatters there with spray-can and spray-json.

object AllJsonFormats
  extends DefaultJsonProtocol with SprayJsonSupport with MetaMarshallers with MetaToResponseMarshallers with NullOptions {

Of course that object has definitions for the serialization of the models.api.Space, models.api.Failure and models.api.Success.

The Space type seems fine, i.e. when I tell the generic method that it will be receiving and returning a Space, no errors. But once I bring an Either into the method call, I get the following compiler error:

could not find implicit value for evidence parameter of type spray.httpx.unmarshalling.Unmarshaller[Either[models.api.Failure,models.api.Success]].

My expectation was that the either implicit in spray.json.DefaultJsonProtocol, i.e. in spray.json.StandardFormats, would have me covered.

The following is my HttpClient class trying its best to be generic. Update: a clearer, repeatable code sample:

object TestHttpFormats
  extends DefaultJsonProtocol {

  // space formats
  implicit val idNameFormat = jsonFormat2(IdName)
  implicit val updatedByFormat = jsonFormat2(Updated)
  implicit val spaceFormat = jsonFormat17(Space)

  // either formats
  implicit val successFormat = jsonFormat1(Success)
  implicit val failureFormat = jsonFormat2(Failure)
}

object TestHttpClient
  extends SprayJsonSupport {

  import TestHttpFormats._
  import DefaultJsonProtocol.{eitherFormat => _, _ }

  val genericResult = HttpClient.httpSaveGeneric[Space,Either[Failure,Success]](
    "", Space("123", IdName("456", "parent"), "my name", "short_name", Updated("", 0)), "user", "password")

With the above, the problem still occurs: the unmarshaller is unresolved. Help would be greatly appreciated.


by atom.gregg at October 30, 2014 08:44 AM

Could we use the cookies with httpclient lib for Clojure

I found an example that gets web data with http-kit using the following code:

(http/get "")

(def options {:timeout 200             ; ms
              :basic-auth ["user" "pass"]
              :query-params {:param "value" :param2 ["value1" "value2"]}
              :user-agent "User-Agent-string"
              :headers {"X-Header" "Value"}})
(http/get "" options
          (fn [{:keys [status headers body error]}] ;; asynchronous response handling
            (if error
              (println "Failed, exception is " error)
              (println "Async HTTP GET: " status))))

However, is it possible to pass the cookies to it as well?

Regards Alex

by Alex Chan at October 30, 2014 08:42 AM


How to exploit calendar arbitrage?

Say we are looking at European call options in a toy environment with zero deterministic interest rates, a stock paying no dividends, no repo rates, etc.

Let $C(T,K)$ be the price of a call with expiry $T$ and strike $K$.

If for $T_{1}<T_{2}$ we have $C(T_{1},K) > C(T_{2},K)$, then this is calendar arbitrage.

Please explain how one should exploit this arbitrage opportunity.

Thank you.
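A hedged sketch of the standard argument, under the zero-rate, no-dividend assumptions above:

At time $0$: sell the near call and buy the far call, pocketing the credit $\pi = C(T_{1},K) - C(T_{2},K) > 0$.

At $T_{1}$: the short leg costs you $(S_{T_{1}}-K)^{+}$. But with zero rates and no dividends a European call dominates its intrinsic value, since $C \ge S - Ke^{-r\tau} = S - K$ and $C \ge 0$, hence the remaining long call is worth $C_{T_{1}}(T_{2},K) \ge (S_{T_{1}}-K)^{+}$. Selling it covers the liability in full.

So the total profit is at least the initial credit $\pi$, with no possibility of loss.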

by Alexander Chertov at October 30, 2014 08:33 AM

Beta vs. Implied Volatility statistical arbitrage using options

Let two underlyings, $S_{1}$ and $S_{2}$, be correlated, and let $\beta$ be the slope of their returns' linear regression; that is, how much $S_{1}$ moves relative to $S_{2}$ (their covariance over $S_{2}$'s variance).

For instance, let


that is, when $S_{2}$ rises by $1\%$, $S_{1}$ goes up by $0.83\%$; in this example we can assume we know the true value of $\beta$, so no estimation error is present.

Now consider two Call options: the former, $c_{1}$, is written on $S_{1}$ and the latter, $c_{2}$, is written on $S_{2}$, and they both have the same moneyness (e.g. 102%).

According to BMS formula, the implied volatility, $v_{1}$, extrapolated from $c_{1}$ is greater than the one, $v_{2}$, extrapolated from $c_{2}$.

For instance, let


on annual basis.

$S_{1}$ and $S_{2}$ are strongly correlated and their linear regression $R^{2}$ is around $0.8$ to $0.9$.

What about buying $S_{2}$ Gamma and selling $S_{1}$ Gamma in order to get a zero-cost position, but having sold (bought) an implied volatility greater (smaller) than realized volatility?

by Lisa Ann at October 30, 2014 08:31 AM


cannot add a jar to scala repl with the :cp command

If I issue:

$ scala -classpath poi-3.9/poi-3.9-20121203.jar 

scala> import org.apache.poi.hssf.usermodel.HSSFSheet
import org.apache.poi.hssf.usermodel.HSSFSheet

Everything works ok, but if instead I issue:

$ scala

scala> :cp poi-3.9/poi-3.9-20121203.jar
Added '/home/sas/tmp/poi/poi-3.9/poi-3.9-20121203.jar'.  Your new classpath is:
Nothing to replay.

scala> import org.apache.poi.hssf.usermodel.HSSFSheet
<console>:7: error: object apache is not a member of package org
       import org.apache.poi.hssf.usermodel.HSSFSheet

Am I missing something?

by opensas at October 30, 2014 08:20 AM


Sea Lion Shirts: FINAL DAY

go away sea lions

It’s the last day to get “Go Away, Sea Lion” shirts! The order period closes at 8pm Pacific time on Thursday.

If you’ve grabbed one already, thanks so much!! If you haven’t, no worries, this is the last I’ll mention it.


by David Malki at October 30, 2014 08:19 AM


Are conditionals necessary in computation? [duplicate]

I know this question might seem weird, maybe I'm just overthinking, but this is really troubling me because I've been a computer engineer for some time now and conditionals (if statements for instance) always seemed pretty clear and intuitive!

I mean, let's assume that I have a program which starts at

int main(int argc, char** argv) // in C

How am I supposed to know what is inside argv and argc? My old, intuitive answer: if statements. Using if statements, I would decide based on the input how to execute my program... I never considered any possibility other than having the input provided under this way, and I, as a programmer, could only mess with what is after the call to the program, with the input already defined. Hence I had no doubt that I needed to use conditionals.

I have ONLY 1 PROGRAM that needs to execute for ANY input and provide the right answers, hence I can't avoid if statements. It all seemed alright, for years now... until I started to think... Could there be other ways?

What if I had a different version of my program for each possible input? The operating system or the hardware could then call a different version according to the input. Then I thought – this would simply delegate the if statements to either the operating system or the hardware, respectively.

Then I even thought, what if the user had a different computer for each different input? I know this might seem really stupid, but I'm just trying to understand the "law" behind the need for conditionals in our physical world!

Also, in this last example it could be argued that the conditional would simply have to be executed in the user's brain (when he decides which computer to use based on the intended input).

Can someone give me some light in this subject? This is really troubling me :( maybe I've overthought things and now I'm paying the price for it...

by Devian Dover at October 30, 2014 08:19 AM


How to create a custom package task to jar a subset of classes in SBT

I am trying to define a separate package task without modifying the original task in compile configuration. This new task will package only a subset of classes conforming an API which we need to be able to share with other teams so they can write plugins for our application. So the end result will be two jars, one with the full application and a second one with a subset of the classes.

I approached this problem by creating a different configuration which I called pluginApi and would redefine the packageBin task within this new configuration so it does not change the original definition of packageBin. This idea was taken from here:

How to create custom "package" task to jar up only specific package in SBT?

In my build.sbt I have:

lazy val PluginApi = config("pluginApi") extend(Compile) describedAs("Custom plugin api configuration")

lazy val root = project in file(".") overrideConfigs (PluginApi)

This effectively creates my new configuration and I can call

sbt pluginApi:packageBin

Which generates the complete jar in the same way as compile:packageBin would do. I then try to modify the mappings in the new packageBin task with:

mappings in (PluginApi, packageBin) ~= { (ms: Seq[(File, String)]) =>
  ms filter { case (file, toPath) =>

but this has no effect. I think the reason is that the call to pluginApi:packageBin is delegated to compile:packageBin rather than being a cloned task.

I can redefine a new packageBin within the new scope like:

packageBin in PluginApi := {


However, I would have to rewrite all the packageBin functionality instead of reusing existing code. Also, in case rewriting is unavoidable, I am not sure what that implementation would look like.

Could somebody provide an example about how to achieve this?

by Miquel at October 30, 2014 08:17 AM

Ordering of the messages

  1. Does ZeroMQ guarantee the order of the messages (FIFO)?
  2. Is there an option for persistence?
  3. Is it the best fit for IPC communication?
  4. Does it allow prioritizing the messages?
  5. Does it allow prioritizing the receivers?
  6. Does it allow both synchronous and asynchronous communication?

by PhiberOptixz at October 30, 2014 08:14 AM




scala out of memory in ubuntu 14.04

I have installed Scala through apt-get install on Ubuntu 14.04. When I use scalac app.scala to compile the source file, it works fine. But when I use fsc app.scala to compile, it runs for a very long time and finally runs out of memory.

My source code is simple, see below:

object FallWinterSpringSummer extends App {
  for (season <- List("fall", "winter", "spring"))
    println(season + ": ")
}

At the same time, when I use scalac to compile the code below, it also runs out of memory:

println("Hello, world, from a script!")

Any help will be appreciated. Thanks.

by daixfnwpu at October 30, 2014 07:26 AM

Planet Clojure

verbs, nouns and file watch semantics

I've recently had a fascination with file watchers semantics in clojure libraries. Having trialed bunch of them in the past, I decided that it was time to have a go at one myself and wanted to share some of my thoughts:

Typically file watchers are implemented using either one of two patterns:

  1. verb based - (add-watch directory callback options)
  2. noun based - (start (watcher callback options))

The difference is very subtle and really centers around the verb start. If the verb start does not exist, we can treat the file-watch as an action on the file object. However if it does exist, we can treat the file-watch as an object or these days, a stuartsierra/componentisable thing. My preference is for the verb style, though it really depends on how the functionality fits within a bigger application. Currently, most bigger applications revolve around the component/dependency injection pattern so it makes sense to have something be componentisable as well.

A survey of existing clojure file-watch libraries and their semantics yield the following results:

java-watch (verb based)

(use [' :only [register-watch]])
(register-watch "/some/path/directory/here" [:modify] #(println "hello event " %))

dirwatch (verb based)

(require '[juxt.dirwatch :refer (watch-dir)])
(watch-dir println ( "/tmp"))

panoptic (noun based)

(use 'panoptic.core)
(def w (-> (file-watcher :checksum :crc32)
           (on-file-modify #(println (:path %3) "changed"))
           (on-file-create #(println (:path %3) "created"))
           (on-file-delete #(println (:path %3) "deleted"))))
(run-blocking! w ["error.log" "access.log"])

clojure-watch (verb based)

(use 'clojure-watch.core)
(start-watch [{:path "/home/derek/Desktop"
               :event-types [:create :modify :delete]
               :bootstrap (fn [path] (println "Starting to watch " path))
               :callback (fn [event filename] (println event filename))
               :options {:recursive true}}])

ojo (noun based)

(defwatch watcher
  ["../my/dir/" [["*goodstring.dat" #"^\S+$"]]] [:create :modify]
  {:parallel parallel?
   :worker-poll-ms 1000
   :worker-count 10
   :extensions [throttle track-appends]
   :settings {:throttle-period (config/value :watcher :throttle-period)}}
  (let [[{:keys [file kind appended-only? bit-position] :as evt}]
    (reset! result
            (format "%s%s"
                    (slurp file)
                    (if appended-only? "(append-only)" "")))))

watchtower (noun based with implicit start)

(watcher ["src/" "resources/"]
  (rate 50) ;; poll every 50ms
  (file-filter ignore-dotfiles) ;; add a filter for the files we care about
  (file-filter (extensions :clj :cljs)) ;; filter by extensions
  (on-change #(println "files changed: " %)))

filevents (verb based)

(watch (fn [kind file]
         (println kind)
         (println file))
       "foo" "bar/")

Out of all the watchers, ojo is seriously cool. I only properly looked at it after finishing my own file watcher and that would be my recommendation if anyone wants an industrial strength watcher.

another file watcher?

Yep. Though it was more of an exercise in design than anything performance based. I'm on a Mac and I chose to wrap the java.nio.file.WatchService api already done to death by many of the file-watch libraries before me. I'm hoping that in a year or two's time, they can replace the poll-based approach with something quicker. The lag-time for events is devastatingly slow. I often found myself starting up the file watcher, creating a new file for a test then having to wait... and wait... I twiddle my thumbs for a bit, sometimes going for a cup of tea. On coming back, I find that the :create file event had successfully registered. Granted, my laptop is a bit old, but I'm extremely disappointed with the WatchService performance.

watch the concept

clojure already has an add-watch function based around refs and atoms. There's a pattern that already exists:

(add-watch object :key (fn [object key previous next]))

However, add-watch is really quite a generic concept and it could be applied to all sorts of situations. Also, watching something usually comes with a condition. We usually don't react on every change that comes to us in our lives. We only react when a certain condition comes about. For example, in our everyday lives, I get told all the time to:

"Watch the noodles on the stove and IF it starts
  boiling over, add some cold water to the pot"

So this then becomes a much more generic concept:

(add-watch object :key (fn [object key previous next]) conditions)

In Orwell's 1984, there is a concept of Newspeak whereby the vocabulary used by the populace becomes increasingly controlled by the party, such that most ideas can be conveyed using a very limited subset of the language. In this way, individual thought and expression become non-existent, which allows the party greater control over the population as well as provides a source of unity. In our society, newspeak is more subtle, though it influences through collective mindshare. Most corporate jargon can also be considered a form of newspeak.

I tend to get conflicted when I program because the key to having greater control is in the limitation of language. So culling words and combining two concepts into one word is very powerful as a strategy for control, though it may not be such a great thing for humanity in general.

Anyways... the concept of adding options to watch was implemented as a protocol and realised in

(require '[ :as watch])
(let [subject  (atom {:a 1 :b 2})
      observer (atom nil)]
  (watch/add subject :clone
             (fn [_ _ p n] (reset! observer n))

             ;; Options
             {:select :b   ;; we will only look at :b
              :diff true}) ;; we will only trigger if :b changes

  (swap! subject assoc :a 0) ;; change in :a does not
  @observer => nil           ;; affect watch

  (swap! subject assoc :b 1) ;; change in :b does
  @observer => 1))

So the watch/add, watch/list and watch/remove implementations extends the functionality of existing atoms and refs as well as allow more semantics so that other data-structures can also take advantage of the same shape in semantics. watch/add follows the same structural semantics as add-watch. They are then implemented as protocols around the object in

(require '[])
(require '[ :as watch])

(def ^:dynamic *happy* (promise))

;; We add a watch  
(watch/add (io/file ".") :save
           (fn [f k _ [cmd file]]

             ;; One-shot strategy where we remove the 
             ;; watch after a single event
             (watch/remove f k)
             (.delete file)
             (deliver *happy* [cmd (.getName file)]))

           ;; Options
           {:types #{:create :modify} 
            :recursive false
            :filter  [".hara"]
            :exclude [".git" "target"]
            :async false})

;; We can look at the watches on the current directory
(watch/list (io/file "."))
=> {:save function<>}

;; Create a file to see if the watch triggers
(spit "happy.hara" "hello")

;; It does!
=> [:create "happy.hara"]

;; We see that the one-shot watch has worked
(watch/list (io/file "."))
=> {}

but what about components?

It was actually very easy to build using the idea of something that is startable and stoppable. watcher, start-watcher and stop-watcher all follow the conventions and so it becomes easy to wrap the component model around the three methods:

(require '[hara.component :as component])
(require '[ :refer :all])

(extend-protocol component/IComponent
  (component/-start [watcher]
    (println "Starting Watcher")
    (start-watcher watcher))

  (component/-stop [watcher]
    (println "Stopping Watcher")
    (stop-watcher watcher)))

  (watcher ["."] println
           {:types #{:create :modify}
            :recursive false
            :filter  [".clj"]
            :exclude [".git"]
            :async false}))

but I want stuartsierra/components!

Okay, okay... I get it.

(require '[com.stuartsierra.component :as component])
(require '[ :refer :all])

(extend-protocol component/Lifecycle
  (component/start [watcher]
    (println "Starting Watcher")
    (start-watcher watcher))

  (component/stop [watcher]
    (println "Stopping Watcher")
    (stop-watcher watcher)))

  (watcher ["."] println
          {:types #{:create :modify}
           :recursive false
           :filter  [".clj"]
           :exclude [".git"]
           :async false}))

by Chris Zheng at October 30, 2014 07:10 AM


Ansible error due to GMP package version on Centos6

I have a Dockerfile that builds an image based on CentOS (tag: centos6):

FROM centos

RUN rpm -iUvh
RUN yum update -y
RUN yum install ansible -y

ADD ./ansible /home/root/ansible

RUN cd /home/root/ansible;ansible-playbook -v -i hosts site.yml

Everything works fine until Docker hits the last line, then I get the following errors:

 [WARNING]: The version of gmp you have installed has a known issue regarding
timing vulnerabilities when used with pycrypto. If possible, you should update
it (ie. yum update gmp).

PLAY [all] ******************************************************************** 

GATHERING FACTS *************************************************************** 
Traceback (most recent call last):
  File "/usr/bin/ansible-playbook", line 317, in <module>
  File "/usr/bin/ansible-playbook", line 257, in main
  File "/usr/lib/python2.6/site-packages/ansible/playbook/", line 319, in run
    if not self._run_play(play):
  File "/usr/lib/python2.6/site-packages/ansible/playbook/", line 620, in _run_play
  File "/usr/lib/python2.6/site-packages/ansible/playbook/", line 565, in _do_setup_step
  File "/usr/lib/python2.6/site-packages/ansible/runner/", line 204, in __init__
    cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  File "/usr/lib64/python2.6/", line 642, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.6/", line 1234, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Stderr from the command:

    package epel-release-6-8.noarch is already installed

I imagine that the cause of the error is the gmp package not being up to date. There is a related issue on GitHub:

But there doesn't seem to be any solution at the moment... Any ideas? Thanks in advance!

My site.yml playbook:

- hosts: all
  tasks:
    - shell: echo 'hello'
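One hedged observation: the gmp message is only a warning; the fatal line is subprocess.Popen(['ssh', ...]) raising OSError: No such file or directory, which usually means the ssh client binary is absent from the container, not a gmp problem. Either install it, or skip ssh entirely since the playbook targets the container itself (-c local is a standard ansible-playbook option):

```
# In the Dockerfile, either install the ssh client ...
RUN yum install -y openssh-clients

# ... or run the playbook with a local connection, so no ssh is needed:
RUN cd /home/root/ansible && ansible-playbook -v -i hosts -c local site.yml
```

The second option is usually preferable for provisioning an image, since it avoids shipping an ssh client the running container does not need.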

by m_vdbeek at October 30, 2014 06:52 AM



Which shell should I learn for FreeBSD and Debian?

I have read many articles that say the official shell in FreeBSD is tcsh, and in Debian it's dash. However, when I either echo $shell (FreeBSD) or run env (Debian), it says that the shell is csh in the first case, or bash in Debian.

This is in fresh installations. I tested under the root account and a normal user account. Also, when I create a new account in FreeBSD, I have the choice of 3 shells; sh, csh or tcsh. If I make no choice, csh is the default for the account.

What is the official, standard shell under those systems? Is it tcsh or csh in FreeBSD? Is it dash or bash in Debian?

Is it me that does something wrong? Or are the articles and sites misinformed?

I want to learn two shells: one for Debian and one for FreeBSD, but I can't find out which one I should learn, since it's not clear which one is the official shell.

Also, while searching for which shell I should learn, I found this: Is it someone that just doesn't like csh, or should we really avoid csh?

I'm using the latest Debian and FreeBSD versions.

I start to get lost with all these shell options; they all look the same to me (except for the syntax; I mean they all seem to offer the same possibilities). That's why I want to learn the official one.

by user1115057 at October 30, 2014 06:28 AM


Scala idiom for partial models?

I am writing an HTTP REST API and I want strongly typed model classes in Scala, e.g. if I have a car model Car, I want to create the following RESTful /car API:

1) For POSTs (create a new car):

case class Car(manufacturer: String, 
               name: String, 
               year: Int)

2) For PUTs (edit an existing car) and GETs, I want to tag along an id too:

case class Car(id: Long, 
               manufacturer: String, 
               name: String, 
               year: Int)

3) For PATCHes (partially edit an existing car), I want this partial object:

case class Car(id: Long, 
               manufacturer: Option[String],
               name: Option[String], 
               year: Option[Int])

But keeping 3 models for essentially the same thing is redundant and error prone (e.g. if I edit one model, I have to remember to edit the other models).

Is there a typesafe way to maintain all 3 models? I am okay with answers that use macros too.

I did manage to combine the first two as follows:

trait Id {
  val id: Long
}

type PersistedCar = Car with Id

by wrick at October 30, 2014 06:21 AM

Is there a standard way to represent conversion from one object to another in Scala?

I made a trait in Scala.

trait Convert {
  def to[T]: T
}

Then in my class I have:

case class SourceCode(code: String) {
  override def to[IndentedCode]: IndentedCode = { ... }
}

Here I am representing a way to convert one object into another. When I want to convert, I use it as such: sourceCode.to[IndentedCode].toString

Is there a standard way to represent conversion from one type to another? Would people use implicits as they could do automatic type conversion, but then how would you chain them together to go from one to another?

by Phil at October 30, 2014 06:08 AM

text based adventure game in scala

I have to make a text-based adventure game where the player has to collect four objects in order and make it back to room 10 to build it. The player only has 20 moves to collect the objects, but as soon as they have all four they only have 5 moves.

I am confused about how to keep track of the objects collected and how to track the moves as well.

This is what I have coded already:

/* getRequest:
 *    Prompts the user for a move
 *    and returns the string entered
 */
def getRequest(): String = {
  println("Where do you go now?")
  scala.io.StdIn.readLine()
}

/* printHelp:
 *    help menu with basic instructions
 */
def printHelp() {
  println("N=North, E=East, S=South and W=West")
}

/* processSpecialCommand:
 *    Processes the special commands: H, Q (only)
 */
def processSpecialCommand(req: String) {
  if (req == "H")
    printHelp()
  else if (req == "Q") {
    println("I can't believe you are giving up. Are you afraid of Perry?")
    println("Oh well, maybe another day then.")
    sys.exit(1)   // This quits the program immediately (aka- abort)
  } else {
    println("What are you saying?")
    println("Use 'H' for help.")
  }
}

/*** Room 1: Foyer (E=6, S=2, W=3) ***/
def room1() {
 // Print the message
 println("Room 1")
 println("  Ah, the foyer.  I probably should call it a lobby")
 println("  but foyer just sounds so much better.  Don't you agree?")
 println("There are doors to the East, South, and West")

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     println("You cannot go there.")
     return room1()  // Go back to room 1
  case "E" =>
     // Go to room 6
     return room6()  
  case "S" =>
     // Go to room 2
     return room2()
  case "W" =>
     // Go to room 3
     return room3()
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room1()  // Go back to room 1
  }
}

/*** Room 2: (N=1, W=4, S=7) ***/
def room2() {
  // Print the message
  println("Room 2")
  println("There are doors to the North, South, and West")

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     // Go to room 1
     return room1()  // Go to room 1
  case "E" =>
     println("You cannot go there.")
     return room2()  // Go back to room 2
  case "S" =>
     // Go to room 7
     return room7()
  case "W" =>
     // Go to room 4
     return room4()
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room2()  // Go back to room 2
  }
}

/*** Room 3: (E=1, S=4) ***/
def room3() {
  // Print the message
  println("Room 3")
  println("You found piece number 4!!!")
  println("There are doors to the East and South")

//if you have pieces 1,2 and 3 you can collect this piece else this part cannot be collected yet

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     println("You cannot go there.")
     return room3()  // Go back to room 3
  case "E" =>
     // Go to room 1
     return room1()
  case "S" =>
     // Go to room 4
     return room4()
  case "W" =>
     println("You cannot go there.")
     return room3()  // Go back to room 3
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room3()  // Go back to room 3
  }
}

/*** Room 4: (N=3, E=2) ***/
def room4() {
 // Print the message
  println("Room 4")
  println("You found piece number 2!!!")
  println("There are doors to the North and East")

 //if you have piece number 1 you can collect this piece else this part cannot be collected yet

 // Get and process the request (moving on to the next state/room)
 val move = getRequest.toUpperCase
 move match {
  case "N" => 
     // Go to room 3
     return room3()
  case "E" =>
     // Go to room 2
     return room2()
  case "S" =>
     println("You cannot go there.")
     return room4()  // Go back to room 4
  case "W" =>
     println("You cannot go there.")
     return room4()  // Go back to room 4
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room4()  // Go back to room 4
  }
}

/*** Room 5: (N=6, S=8) ***/
def room5() {
// Print the message
println("Room 5")
println("You found piece number 3!!!")
println("There are doors to the North and South")

//if you have pieces 1 and 2 you can collect this piece else this part cannot be collected yet

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     // Go to room 6
     return room6()
  case "E" =>
     println("You cannot go there.")
     return room5()
  case "S" =>
     // Go to room 8
     return room8()  
  case "W" =>
     println("You cannot go there.")
     return room5()  // Go back to room 5
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room5()  // Go back to room 5
  }
}

/*** Room 6: (E=9, S=5, W=1) ***/
def room6() {
// Print the message
println("Room 6")
println("There are doors to the East, South and West")

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     println("You cannot go there.")
     return room6()
  case "E" =>
     // Go to room 9
     return room9()
  case "S" =>
     // Go to room 5
     return room5()  
  case "W" =>
     //Go to room 1
     return room1() 
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room6()  // Go back to room 6
  }
}

/*** Room 7: (N=2, E=8) ***/
def room7() {
// Print the message
println("Room 7")
println("There are doors to the North and East")

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     // Go to room 2
     return room2()
  case "E" =>
     // Go to room 8
     return room8()
  case "S" =>
     println("You cannot go there.")
     return room7()  
  case "W" =>
     println("You cannot go there.")
     return room7() 
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room7()  // Go back to room 7
  }
}

/*** Room 8: (N=5, E=10, W=7) ***/
def room8() {
// Print the message
println("Room 8")
println("There are doors to the North, East and West")

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     // Go to room 5
     return room5()
  case "E" =>
     // Go to room 10
     return room10()
  case "S" =>
     println("You cannot go there.")
     return room8()  
  case "W" =>
     // Go to room 7
     return room7() 
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room8()  // Go back to room 8
  }
}

 /*** Room 9: (S=10, W=6) ***/
 def room9() {
 // Print the message
 println("Room 9")
 println("You found piece number 1!!!")
 println("There are doors to the South and West")

 //collect the first piece

 // Get and process the request (moving on to the next state/room)
 val move = getRequest.toUpperCase
 move match {
  case "N" => 
     println("You cannot go there.")
     return room9()
  case "E" =>
     println("You cannot go there.")
     return room9()
  case "S" =>
     // Go to room 10
     return room10()  
  case "W" =>
     // Go to room 6
     return room6() 
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room9()  // Go back to room 9
  }
}

/*** Room 10: (N=9, W=8) ***/
def room10() {
// Print the message
println("Room 10")
println("There are doors to the North and West")

// Get and process the request (moving on to the next state/room)
val move = getRequest.toUpperCase
move match {
  case "N" => 
     // Go to room 9
     return room9()
  case "E" =>
     println("You cannot go there.")
     return room10()
  case "S" =>
     println("You cannot go there.")
     return room10()
  case "W" =>
     // Go to room 8
     return room8() 
  case cmd =>
     // Maybe it is a special request (Help or Quit)
     processSpecialCommand(cmd)
     return room10()  // Go back to room 10
  }
}
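On the two sub-questions (tracking collected pieces and counting moves), the usual approach is a small piece of game state that is consulted on every move. Here is a minimal sketch in Python, since the logic carries over directly; all names (`state`, `collect_piece`, `use_move`) are illustrative and not from the assignment:

```python
# Minimal game-state sketch: 20 moves until all four pieces are held,
# then only 5 moves remain to reach room 10.
state = {"moves_left": 20, "pieces": []}

def collect_piece(n):
    # Pieces must be collected in order 1, 2, 3, 4.
    if n == len(state["pieces"]) + 1:
        state["pieces"].append(n)
        if len(state["pieces"]) == 4:
            state["moves_left"] = 5  # clock tightens once all pieces are held

def use_move():
    state["moves_left"] -= 1
    return state["moves_left"] > 0  # False means the game is over

collect_piece(1)
collect_piece(3)   # ignored: out of order
collect_piece(2)
collect_piece(3)
collect_piece(4)   # all four collected, so moves_left drops to 5
```

In the Scala version, the same effect can be had without mutable globals by passing `(movesLeft, pieces)` as parameters into each `roomN()` call and checking them before matching on the move.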

by acolisto at October 30, 2014 05:46 AM



Call by name vs call by value in Scala, clarification needed

As I understand it, in Scala, a function's arguments may be passed either

  • by-value or
  • by-name

For example, given the following declarations, do we know how the function will be called?


def  f (x:Int, y:Int) = x;


f (1,2)
f (23+55,5)
f (12+3, 44*11)

What are the rules please?
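The rule, briefly: a plain parameter like `x: Int` is call-by-value (the argument expression, e.g. `23+55`, is evaluated once before the call), while `x: => Int` is call-by-name (evaluation is deferred to each use, so an unused argument is never evaluated). Python has no by-name parameters, but the distinction can be sketched with zero-argument lambdas acting as thunks; this is an illustration of the idea, not Scala semantics:

```python
# By-value: the argument expression is evaluated once, before the call.
# By-name: evaluation is deferred until each use inside the body.
evaluations = []

def expensive():
    evaluations.append("evaluated")
    return 23 + 55

def f_by_value(x, y):
    return x              # y was still evaluated at the call site

def f_by_name(x, y):
    return x()            # y() is never called, so y is never evaluated

f_by_value(expensive(), expensive())                 # both args evaluated
count_by_value = len(evaluations)

evaluations.clear()
f_by_name(lambda: expensive(), lambda: expensive())  # only x() runs
count_by_name = len(evaluations)
```

With by-value semantics both argument expressions run (two evaluations); with the thunk-based by-name version only the used one runs (one evaluation).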

by Jam at October 30, 2014 05:42 AM


NAT and source IP filtering in PF, using OpenBSD >= 4.7

I just read a book about PF (The Book Of PF, No Starch), but there's one question not answered by it.

If I have a gateway machine using two interfaces, $int_if and $ext_if, and I NAT the packets coming from $int_if:net (which is, let's say, to $ext_if using match, when does the NAT get applied? Before or after the filtering rules?


match out on $ext_if from nat-to ($ext_if)
pass out on $ext_if from
block drop out on $ext_if from

Does that work? Or does the source IP of a packet coming from get NATed to the address of $ext_if before the check whether it's from gets evaluated?

This diagram is not helpful for answering this question, I think, but it's interesting nevertheless:

If you read the PF NAT FAQ, especially the section "Configuring NAT", you'll come across these sentences:

When a packet is selected by a match rule, parameters (e.g. nat-to) in that rule are remembered and are applied to the packet when a pass rule matching the packet is reached. This permits a whole class of packets to be handled by a single match rule and then specific decisions on whether to allow the traffic can be made with block and pass rules.

I think that sounds as if it's not as I stated in the paragraph above: the source IP gets "remembered" until there's a decision about the action to take on the packet. Once the decision is made, the NAT gets applied.

What do you think?

P.S.: This is a quite theoretic question. If you're a little bit pragmatic, you'll do it this way:

match out on $ext_if from nat-to ($ext_if)
block drop from
# or, explicitly,
# block drop in on $int_if from

So the block rule gets already applied when the packet comes in on $int_if.

EDIT: Another possibility is, of course, to decide before NAT:

pass from
block drop from
match out on $ext_if from nat-to ($ext_if)

If a packet from .23 arrives, it first matches the first rule, then matches the second rule and the third "rule". But as the second rule is the last deciding about passing/blocking, the packet gets blocked. Right?

by dermesser at October 30, 2014 05:42 AM



Scala illegal Inheritance error doesn't make sense

I am writing a program in Java and Scala (using interop) and it keeps giving me this compile error which doesn't make sense...

Description Resource Path Location Type illegal inheritance; inherits different type instances of trait IEvaluationFunction: core.interfaces.IEvaluationFunction[core.representation.ILinearRepresentation[Double]] and core.interfaces.IEvaluationFunction[core.representation.ILinearRepresentation[Double]] IConstructors.scala /ScalaMixins - Parjecoliv1/src/aop line 36 Scala Problem

It says it inherits different type instances, but they are the same. They're both:


Can somebody help me solve or understand this?

The code:

This is where it gives error. The code is in Scala.

def createFermentationEvaluation(fermentationProcess: FermProcess,
     interpolationInterval: Int): FermentationEvaluation = {
  return new FermentationEvaluation(fermentationProcess, interpolationInterval)
    with EvaluationFunctionAspect[ILinearRepresentation[Double]]
}
Here are the interface and classes that it uses:

public class FermentationEvaluation
  extends EvaluationFunction<ILinearRepresentation<Double>> {...}

trait EvaluationFunctionAspect[T <:IRepresentation]
  extends IEvaluationFunction[T] {...}

public abstract class EvaluationFunction<T extends IRepresentation>
  implements IEvaluationFunction<T> {...}

public interface IRepresentation {...}

public interface ILinearRepresentation<E> extends IRepresentation {...}

I didn't include the body of any since it seems to be an inheritance problem.

by André Rodrigues at October 30, 2014 05:11 AM




How do I change intellij idea to compile with scala 2.11?

I am using Intellij Idea 13.1.4. I have a scala sbt project. It is currently being compiled with Scala 2.10. I would like to change this to Scala 2.11. In my build.sbt, I have:

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-compiler" % "2.11.0"
)

When I build my project, it still builds as a Scala 2.10 project.

Also, under my Project Settings->Modules->Scala->Facet 'Scala'->Compiler library, Intellij still shows scala-compiler-bundle:2.10.2. There is no option for a 2.11.x bundle. How would I get an option for Scala 2.11.x?


by Di Zou at October 30, 2014 04:36 AM


How to calculate the Sharpe ratio for market neutral strategies?

Suppose I am long one stock and short an index in a ratio that makes the market beta effectively zero, and I close the position with some positive P&L.

How should I calculate the return for the portfolio above? And how do I then calculate the Sharpe ratio for such long-short, market-neutral portfolios?
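One common convention (by no means the only one) is to take the daily P&L of the combined long-short book, divide by the capital employed (gross exposure or margin, since a dollar-neutral book has no natural "invested amount") to get a return series, and annualize mean over standard deviation. A hedged sketch in Python; the function name and the choice of capital base are illustrative:

```python
import math

def sharpe_ratio(daily_pnl, capital, periods_per_year=252):
    """Annualized Sharpe of a long-short book.

    daily_pnl: list of daily profit/loss of the combined position.
    capital:   base used to turn P&L into returns; for a market-neutral
               book this is a convention (gross exposure or margin).
    """
    returns = [p / capital for p in daily_pnl]
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)  # sample variance
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)

sr = sharpe_ratio([10, -5, 10, -5, 10, -5], capital=1000)
```

The risk-free rate is often omitted here on the argument that a dollar-neutral long-short position is self-financing, but conventions differ, so state yours explicitly when reporting the number.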

by surya kiran at October 30, 2014 04:34 AM



Lambda Expressions vs Procedural-styled Functions

I just don't understand the power of the lambda expression.


def sum(x, y):
    return x + y


(lambda (x y) (+ x y))

Why is one so different from the other besides the lambda expression not being given a formal name? It seems like you can do the same things with either method, so why is my teacher so high on the lambda calculus?
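One difference worth noting: `def` is a statement that binds a name, while a lambda is an expression, so it can appear inline wherever a value is expected and can be returned to build closures. A small Python illustration of that (the names `make_adder` and `add5` are just examples):

```python
# A lambda is a value: it can be built on the fly, passed, and returned.

# Passed inline, with no need to name a throwaway helper:
pairs = sorted([(1, "b"), (2, "a")], key=lambda p: p[1])

# Returned from a function (a closure), which is where anonymous
# functions start to pay off:
def make_adder(n):
    return lambda x: x + n

add5 = make_adder(5)
result = add5(37)  # 42
```

The deeper point teachers tend to make is theoretical: in the lambda calculus, anonymous functions are the *only* construct, and everything else (numbers, booleans, recursion) can be encoded with them, which is what makes the calculus interesting as a model of computation.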

by duskamo at October 30, 2014 04:14 AM


does the diagonalization of real numbers infer the pigeonhole principle?

Just a quick thought I had. The diagonalization proof goes by constructing a real number x that is guaranteed not to be paired with any integer in the countable set. Does this rely on (employ) the pigeonhole principle to conclude that the set of real numbers is uncountable? Is pigeonhole even an important part of these types of proofs, or is something else going on?
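For what it's worth, the standard argument doesn't need pigeonhole: it directly constructs, for any claimed enumeration, an item that differs from the n-th entry in position n. A finite Python sketch of the construction on binary strings, purely to illustrate the mechanism:

```python
def diagonal(rows):
    """Given equal-length 0/1 strings (a finite 'enumeration'),
    build a string that differs from row n at position n."""
    return "".join("1" if rows[n][n] == "0" else "0"
                   for n in range(len(rows)))

rows = ["0000", "1111", "0101", "1010"]
d = diagonal(rows)
# d flips each diagonal bit, so it equals none of the rows
```

Pigeonhole would tell you that some hole must contain two pigeons; here nothing is counted at all. The constructed `d` is explicitly unequal to every row, which is a direct proof that no enumeration is surjective.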

submitted by FearMonstro
[link] [7 comments]

October 30, 2014 04:14 AM

Portland Pattern Repository


How to use sequence from scalaz to transform T[G[A]] to G[T[A]]

I have this code to transform List[Future[Int]] to Future[List[Int]] by using scalaz sequence.

import scalaz.concurrent.Future

val t = List(Future.now(1), Future.now(2), Future.now(3)) //List[Future[Int]]
val r = t.sequence //Future[List[Int]]

Because I am using Future from scalaz, implicit resolution does the magic for me. I just wonder: if the type is a custom class rather than a predefined one like Future, how can I define the implicit resolution to achieve the same result?

case class Foo(x: Int)

val t = List(Foo(1), Foo(2), Foo(3)) //List[Foo[Int]]

val r = t.sequence //Foo[List[Int]]

Many thanks in advance
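In scalaz, `t.sequence` on a List needs an Applicative instance for the outer type, so for a custom container you would make it a type constructor (e.g. `Foo[A]`) and provide an `implicit Applicative[Foo]`. The mechanics of why `point` plus a pairwise combine suffice can be sketched in Python; `Box` and its methods here are illustrative stand-ins for the typeclass operations, not scalaz API:

```python
class Box:
    """Illustrative applicative-like container (stand-in for Foo[A])."""
    def __init__(self, value):
        self.value = value

    @staticmethod
    def point(a):              # analogue of Applicative.point / pure
        return Box(a)

    def map2(self, other, f):  # combine two boxed values; enough for sequence
        return Box(f(self.value, other.value))

def sequence(boxes):
    """List of Box[A] -> Box of list[A], folding with point and map2."""
    acc = Box.point([])
    for b in boxes:
        acc = acc.map2(b, lambda xs, x: xs + [x])
    return acc

r = sequence([Box(1), Box(2), Box(3)])   # Box wrapping [1, 2, 3]
```

The generic library version does exactly this fold for you once the instance exists, which is why defining the implicit is the only custom work needed.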

by Cloud tech at October 30, 2014 03:57 AM

Clojure: Convert hash-maps key strings to keywords?

I'm pulling data from Redis using Aleph:

(apply hash-map @(@r [:hgetall (key-medication id)]))

The problem is this data comes back with strings for keys, for ex:

({"name" "Tylenol", "how" "instructions"})

When I need it to be:

({:name "Tylenol", :how "instructions"})

I was previously creating a new map via:

{ :name (m "name"), :how (m "how")}

But this is inefficient for a large amount of keys.

Is there a function that does this? Or do I have to loop through each?

by dMix at October 30, 2014 03:49 AM

Planet Emacsen

sachachua: Emacs hangout notes

Prompted by Michael Fogleman’s tweet that he’d like to see a bunch of us Emacs geeks get together in one room for a hackathon, Nic Ferrier and I tried out a casual Emacs hangout. Tinychat didn’t work, but Google Hangouts worked fine. A bunch of people saw our tweets about it too and dropped by, yay! Here are some things we talked about (mostly nifty tweaks from Nic):

  • shadchen is great for pattern matching, especially within trees
  • Alec wanted to know about Emacs and Git, so Nic demonstrated basic Magit
  • after-init-hook – load things there instead of in your ~/.emacs.d/init.el, so that your init.el does not break and you can test things easily from within Emacs
  • I shared isearch-describe-bindings, which had a number of goodies that I hadn’t known about before
  • Recognizing the opportunity to share what you’re working on (ex: nicferrier’s working on an Emacs Lisp to Javascript compiler)

Google Hangouts screensharing worked well for us, giving multiple people the opportunity to share their screen and allowing people to choose what they wanted to focus on. Nic also started up a tmux session and a repository of public keys, but that’s a bit more involved and requires more trust/coordination, so screen-sharing will likely be the way to go unless people have more of a pairing thing set up.

This kind of informal hangout might be a good way for people to share what they’re working on just in case other people want to drop by and help out or ask questions (which people can optionally answer, or postpone if they want to stay focused on their work). Something a little more focused than this might be to pick one bug or task and work on it together, maybe starting with a “ridealong” (one person screenshares, thinking out loud as he or she works, and taking the occasional question) and moving towards full pairing (people working on things together).

Some of my short-term Emacs goals are:

  • Improve my web development workflow and environment (including getting the hang of Magit, Smart Parens, Skewer, AutoComplete / Company Mode, and other good things)
  • Learn how to write proper tests for Emacs-related things
  • Get back into contributing to the Emacs community, perhaps starting to work on code/tests
  • Look up my Org agenda on my phone, probably with Org Mobile or some kind of batch process

Let’s give this a try. =) I set up a public calendar and added an event on Nov 5, 9-11PM Toronto time. If folks want to drop by, we’ll see how that works out!

The post Emacs hangout notes appeared first on sacha chua :: living an awesome life.

by Sacha Chua at October 30, 2014 03:44 AM



Is investing my time into Swift a good idea? Will it have any practical uses outside of Apple's technology in the future?

Don't get me wrong, I like the language and it's definitely eased the ability to code for Apple products, but I'm hoping this language will have more to it than making Apple more of a unique snowflake than it already is.

submitted by CortezVee
[link] [4 comments]

October 30, 2014 03:39 AM

Wes Felter


Eliminate a left recursion

Hello there, I've been thinking about how to eliminate this left recursion but didn't come up with anything. It's for developing a parser in JavaCC; can any of you guys help me? Thanks a lot.

void Expression(): {} {
  ( Expression() (<OPERADORES>) Expression()
  | Expression() <ACOLCHETES> Expression() <FCOLCHETES>
  | Expression() <PONTO> "length"
  | Expression() <PONTO> Identifier() <APAR> (Expression() (<VIRGULA> Expression())*)? <FPAR>
  | <INTEIRO>
  | "true"
  | "false"
  | Identifier()
  | "this"
  | "new" "int" <ACOLCHETES> Expression() <FCOLCHETES>
  | "new" Identifier() <APAR> <FPAR>
  | "!" Expression()
  | <APAR> Expression() <FPAR> )
}
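The standard transformation replaces each immediately left-recursive production A → Aα₁ | … | Aαₘ | β₁ | … | βₙ with A → βᵢ A' and A' → αⱼ A' | ε; for the grammar above that means splitting Expression into a non-left-recursive head and a tail of postfix/operator continuations. A generic sketch of the rewrite in Python (symbol names like "Expr" are illustrative, not from the JavaCC grammar):

```python
def eliminate_left_recursion(nonterminal, productions):
    """Rewrite A -> A a1 | ... | A am | b1 | ... | bn (immediate left
    recursion) into A -> b1 A' | ... | bn A' and A' -> a1 A' | ... | am A' | eps.
    Each production is a list of symbol tokens."""
    prime = nonterminal + "'"
    recursive = [p[1:] for p in productions if p and p[0] == nonterminal]
    base = [p for p in productions if not p or p[0] != nonterminal]
    if not recursive:
        return {nonterminal: productions}  # nothing to do
    return {
        nonterminal: [b + [prime] for b in base],
        prime: [a + [prime] for a in recursive] + [["eps"]],
    }

g = eliminate_left_recursion(
    "Expr",
    [["Expr", "OP", "Expr"], ["INT"], ["(", "Expr", ")"]],
)
```

In JavaCC this typically shows up as a Primary() production for the non-recursive alternatives followed by a `( ... )*` loop consuming the operator/postfix tails, since `( ... )*` is exactly the A' → αA' | ε part.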

submitted by rbazzo
[link] [4 comments]

October 30, 2014 03:32 AM


Most memorable CS paper titles

Following a fruitful question in MO, I thought it would be worthwhile to discuss some notable paper names in CS.

It is quite clear that most of us might be attracted to read (or at least glance at) a paper with an interesting title (at least I do so every time I go over a list of papers in a conference), or avoid reading poorly named articles.

Which papers do you remember because of their titles (and, not necessarily, the contents)?

My favorite, while not a proper TCS paper, is "The relational model is dead, SQL is dead, and I don’t feel so good myself."

by R B at October 30, 2014 02:44 AM


Any suggestions for really interesting peer reviewed papers on computer science?

I'm a CS major and need to write a research paper based on three peer-reviewed papers on the same topic. Has anyone come across any recent (required) peer-reviewed papers that they really enjoyed reading?

My fall back plan is to use some published work from my favorite professor, but before doing that I would love to hear about any interesting papers currently circulating around the field.

submitted by PastyPilgrim
[link] [5 comments]

October 30, 2014 02:22 AM

Wes Felter


How did punch card systems work? Professor Brailsford delves further into ...

How did punch card systems work? Professor Brailsford delves further into the era of mainframe computing with this hands-on look at punch cards.


by IbnFirnas at October 30, 2014 02:11 AM


How to collect data from redis with fluentd or maybe how to develop my plugin in fluentd

Fluentd collected data from the nginx log file before. Now I have put the nginx access_log into Redis with my new module. I want to collect data from Redis with fluentd and send the data to a fluentd server. So how do I collect data from Redis? Has anyone ever done this?

by gougou at October 30, 2014 02:11 AM

arXiv Logic in Computer Science

Equational properties of saturated least fixed points. (arXiv:1410.8111v1 [cs.LO])

Recently, a new fixed point operation has been introduced over certain functions between saturated complete lattices and used to give semantics to logic programs with negation and boolean context-free grammars. We prove that this new operation satisfies the `standard' identities of fixed point operations as described by the axioms of iteration theories. We also study this new fixed point operation in connection with lambda-abstraction.

by Zoltan Esik at October 30, 2014 01:30 AM

Design of Binary Quantizers for Distributed Detection under Secrecy Constraints. (arXiv:1410.8100v1 [cs.IT])

In this paper, we consider the problem of designing binary quantizers at the sensors for a distributed detection network in the presence of an eavesdropper. We propose to design these quantizers under a secrecy constraint imposed on the eavesdropper. The performance metric chosen in this paper is the KL Divergence at both the fusion center (FC) and the eavesdropper (Eve). First, we consider the problem of secure distributed detection in the presence of identical sensors and channels. We prove that the optimal quantizer can be implemented as a likelihood ratio test, whose threshold depends on the specified secrecy constraint on the Eve. We present an algorithm to find the optimal threshold in the case of Additive White Gaussian Noise (AWGN) observation models at the sensors. In the numerical results, we discuss the tradeoff between the distributed detection performance and the secrecy constraint on the eavesdropper. We show how the system behavior varies as a function of the secrecy constraint imposed on Eve. Finally, we also investigate the problem of designing the quantizers for a distributed detection network with non-identical sensors and channels. We decompose the problem into $N$ sequential problems using dynamic programming, where each individual problem has the same structure as the scenario with identical sensors and channels. Optimum binary quantizers are obtained. Numerical results are presented for illustration.

by V. Sriram Siddhardh Nadendla, Pramod K. Varshney at October 30, 2014 01:30 AM

A Dynamic Network Formation Model for Understanding Bacterial Self-Organization into Micro-Colonies. (arXiv:1410.8091v1 [q-bio.PE])

We propose a general parametrizable model to capture the dynamic interaction among bacteria in the formation of micro-colonies. Micro-colonies represent the first social step towards the formation of structured multicellular communities known as bacterial biofilms, which protect the bacteria against antimicrobials. In our model, bacteria can form links in the form of intercellular adhesins (such as polysaccharides) to collaborate in the production of resources that are fundamental to protect them against antimicrobials. Since maintaining a link can be costly, we assume that each bacterium forms and maintains a link only if the benefit received from the link is larger than the cost, and we formalize the interaction among bacteria as a dynamic network formation game. We rigorously characterize some of the key properties of the network evolution depending on the parameters of the system. In particular, we derive the parameters under which it is guaranteed that all bacteria will join micro-colonies and the parameters under which it is guaranteed that some bacteria will not join micro-colonies. Importantly, our study does not only characterize the properties of networks emerging in equilibrium, but it also provides important insights on how the network dynamically evolves and on how the formation history impacts the emerging networks in equilibrium. This analysis can be used to develop methods to influence the evolution of the network on the fly, and such methods can be useful to treat or prevent biofilm-related diseases.

by Luca Canzian, Kun Zhao, Gerard C. L. Wong, Mihaela van der Schaar at October 30, 2014 01:30 AM

Malware "Ecology" Viewed as Ecological Succession: Historical Trends and Future Prospects. (arXiv:1410.8082v1 [cs.CR])

The development and evolution of malware including computer viruses, worms, and trojan horses, is shown to be closely analogous to the process of community succession long recognized in ecology. In particular, both changes in the overall environment by external disturbances, as well as, feedback effects from malware competition and antivirus coevolution have driven community succession and the development of different types of malware with varying modes of transmission and adaptability.

by Reginald D. Smith at October 30, 2014 01:30 AM

ProbReach: Verified Probabilistic Delta-Reachability for Stochastic Hybrid Systems. (arXiv:1410.8060v1 [cs.LO])

We present ProbReach, a tool for verifying probabilistic reachability for stochastic hybrid systems, i.e., computing the probability that the system reaches an unsafe region of the state space. In particular, ProbReach will compute an arbitrarily small interval which is guaranteed to contain the required probability. Standard (non-probabilistic) reachability is undecidable even for linear hybrid systems. In ProbReach we adopt the weaker notion of delta-reachability, in which the unsafe region is overapproximated by a user-defined parameter (delta). This choice leads to false alarms, but also makes the reachability problem decidable for virtually any hybrid system. In ProbReach we have implemented a probabilistic version of delta-reachability that is suited for hybrid systems whose stochastic behaviour is given in terms of random initial conditions. In this paper we introduce the capabilities of ProbReach, give an overview of the parallel implementation, and present results for several benchmarks involving highly non-linear hybrid systems.

by Fedor Shmarov, Paolo Zuliani at October 30, 2014 01:30 AM

OFDM Transmission Performance Evaluation in V2X Communication. (arXiv:1410.8039v1 [cs.NI])

Vehicle to Vehicle and Vehicle to Infrastructure (V2X) communication systems are one of the main topics in the research domain. Their performance evaluation is an important step before their on-board integration into vehicles and their probable real deployment. This paper studies the physical layer (PHY) of the upcoming vehicular communication standard IEEE 802.11p. The standard's PHY layer model, with many associated phenomena, is implemented in V2V and V2I situations through different scenarios. The series of simulations carried out performs data exchange between high-speed vehicles over different channel models and different transmitted packet sizes. We underline several propagation channel and other important parameters, which affect both the physical layer network performance and the QoT. The Bit Error Rate (BER) versus Signal to Noise Ratio (SNR) of all coding rates is used to evaluate the performance of the communication.

by Aymen Sassi, Faiza Charfi, Lotfi Kamoun, Yassin Elhillali, Atika Rivenq at October 30, 2014 01:30 AM

Towards a Visual Turing Challenge. (arXiv:1410.8027v1 [cs.AI])

As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks and refueled the hope of achieving the old AI dream of building machines that could pass a Turing test in open domains. In order to steadily make progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on these open tasks. In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented dataset of a question-answering task based on real-world indoor images that establishes a visual Turing challenge. Finally, we argue that despite the success of unique ground-truth annotation, we likely have to step away from carefully curated datasets and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area.

by Mateusz Malinowski, Mario Fritz at October 30, 2014 01:30 AM

Understanding the Mechanics of Some Localized Protocols by Theory of Complex Networks. (arXiv:1410.8007v1 [cs.NI])

In the study of ad hoc sensor networks, clustering plays an important role in energy conservation; therefore, analyzing the mechanics of such a topology can be helpful for making logistic decisions. Using the theory of complex networks, the topological model is extended, where we account for the probability of preferential attachment and the anti-preferential attachment policy of sensor nodes to analyze the formation of clusters and calculate the probability of clustering. A theoretical analysis is conducted to determine the nature of the topology and quantify some of the facts observed during the execution of topology control protocols. The quantification of the observed facts leads to an alternative understanding of the energy efficiency of the routing protocols.

by <a href="">Chiranjib Patra</a>, <a href="">Samiran Chattopadhyay</a>, <a href="">Matangini Chattopadhyay</a>, <a href="">Parama Bhaumik</a> at October 30, 2014 01:30 AM

Linked Data Integration with Conflicts. (arXiv:1410.7990v1 [cs.DB])

Linked Data have emerged as a successful publication format and one of its main strengths is its fitness for integration of data from multiple sources. This gives them a great potential both for semantic applications and the enterprise environment where data integration is crucial. Linked Data integration poses new challenges, however, and new algorithms and tools covering all steps of the integration process need to be developed. This paper explores Linked Data integration and its specifics. We focus on data fusion and conflict resolution: two novel algorithms for Linked Data fusion with provenance tracking and quality assessment of fused data are proposed. The algorithms are implemented as part of the ODCleanStore framework and evaluated on real Linked Open Data.

by <a href="">Jan Michelfeit</a>, <a href="">Tom&#xe1;&#x161; Knap</a>, <a href="">Martin Ne&#x10d;ask&#xfd;</a> at October 30, 2014 01:30 AM

A novel wireless sensor network topology with fewer links. (arXiv:1410.7955v1 [cs.NI])

This paper, based on $k$-NN graph, presents symmetric $(k,j)$-NN graph $(1 \leq j < k)$, a brand new topology which could be adopted by a series of network-based structures. We show that the $k$ nearest neighbors of a node exert disparate influence on guaranteeing network connectivity, and connections with the farthest $j$ ones among these $k$ neighbors are competent to build up a connected network, contrast to the current popular strategy of connecting all these $k$ neighbors. In particular, for a network with node amount $n$ up to $10^3$, as experiments demonstrate, connecting with the farthest three, rather than all, of the five nearest neighbor nodes, i.e. $(k,j)=(5,3)$, can guarantee the network connectivity in high probabilities. We further reveal that more than $0.75n$ links or edges in $5$-NN graph are not necessary for the connectivity. Moreover, a composite topology combining symmetric $(k,j)$-NN and random geometric graph (RGG) is constructed for constrained transmission radii in wireless sensor networks (WSNs) application.
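The construction the abstract describes can be sketched in a few lines. The following Python toy (illustrative parameters and random points, not the paper's code) links each node to the farthest j of its k nearest neighbours and symmetrizes the result:

```python
import math
import random

# Illustrative parameters and random points; the paper reports that
# (k, j) = (5, 3) keeps networks of up to ~10^3 nodes connected with
# high probability.
random.seed(1)
n, k, j = 30, 5, 3
pts = [(random.random(), random.random()) for _ in range(n)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

edges = set()
for u in range(n):
    # the k nearest neighbours of u, nearest first
    nbrs = sorted((v for v in range(n) if v != u),
                  key=lambda v: dist(pts[u], pts[v]))[:k]
    # connect only to the farthest j of those k, stored as undirected edges
    for v in nbrs[-j:]:
        edges.add((min(u, v), max(u, v)))

print(len(edges))  # at most n*j, usually fewer once duplicates merge
```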

by <a href="">Jie Ding</a>, <a href="">Min-Yi Wang</a>, <a href="">Qiao Wang</a>, <a href="">Xin-Shan Zhu</a> at October 30, 2014 01:30 AM

On the equivalence of state transformer semantics and predicate transformer semantics. (arXiv:1410.7930v1 [cs.LO])

G. Plotkin and the author have worked out the equivalence between state transformer semantics and predicate transformer semantics in a domain theoretical setting for programs combining nondeterminism and probability. Works of C. Morgan and co-authors, Keimel, Rosenbusch and Streicher, already go in the same direction using only discrete state spaces. It is the aim of this paper to exhibit a general framework in which one can hope that state transformer semantics and predicate transformer semantics are equivalent. We use a notion of entropicity borrowed from universal algebra and a relaxed setting adapted to the domain theoretical situation.

by <a href="">Klaus Keimel</a> at October 30, 2014 01:30 AM

A Description of the Subgraph Induced at a Labeling of a Graph by the Subset of Vertices with an Interval Spectrum. (arXiv:1410.7927v1 [cs.DM])

The sets of vertices and edges of an undirected, simple, finite, connected graph $G$ are denoted by $V(G)$ and $E(G)$, respectively. An arbitrary nonempty finite subset of consecutive integers is called an interval. An injective mapping $\varphi:E(G)\rightarrow \{1,2,...,|E(G)|\}$ is called a labeling of the graph $G$. If $G$ is a graph, $x$ is its arbitrary vertex, and $\varphi$ is its arbitrary labeling, then the set $S_G(x,\varphi)\equiv\{\varphi(e)\mid e\in E(G),\ e\ \textrm{is incident with}\ x\}$ is called a spectrum of the vertex $x$ of the graph $G$ at its labeling $\varphi$. For any graph $G$ and its arbitrary labeling $\varphi$, a structure of the subgraph of $G$, induced by the subset of vertices of $G$ with an interval spectrum, is described.

by <a href="">Narine N. Davtyan</a>, <a href="">Arpine M. Khachatryan</a>, <a href="">Rafayel R. Kamalian</a> at October 30, 2014 01:30 AM

Implementation and Experimental Evaluation of a Collision-Free MAC Protocol for WLANs. (arXiv:1410.7924v1 [cs.NI])

Collisions are a main cause of throughput degradation in Wireless LANs. The current contention mechanism for these networks is based on a random backoff strategy to avoid collisions with other transmitters. Even though it can reduce the probability of collisions, the random backoff prevents users from achieving Collision-Free schedules, where the channel would be used more efficiently. Modifying the contention mechanism by waiting for a deterministic timer after successful transmissions, users would be able to construct a Collision-Free schedule among successful contenders. This work shows the experimental results of a Collision-Free MAC (CF-MAC) protocol for WLANs using commercial hardware and open firmware for wireless network cards which is able to support many users. Testbed results show that the proposed CF-MAC protocol leads to a better distribution of the available bandwidth among users, higher throughput and lower losses than the unmodified WLANs clients using a legacy firmware.

by <a href="">Luis Sanabria-Russo</a>, <a href="">Francesco Gringoli</a>, <a href="">Jaume Barcelo</a>, <a href="">Boris Bellalta</a> at October 30, 2014 01:30 AM

Proceedings Eleventh Workshop on User Interfaces for Theorem Provers. (arXiv:1410.7850v1 [cs.LO])

The UITP workshop series brings together researchers interested in designing, developing and evaluating user interfaces for automated reasoning tools, such as interactive proof assistants, automated theorem provers, model finders, tools for formal methods, and tools for visualising and manipulating logical formulas and proofs. The eleventh edition of UITP took place in Vienna, Austria, and was part of the Vienna Summer of Logic, the largest ever joint conference in the area of Logic. This proceedings contains the eight contributed papers that were accepted for presentation at the workshop as well as the two invited papers.

by <a href="">Christoph Benzm&#xfc;ller</a>, <a href="">Bruno Woltzenlogel Paleo</a> at October 30, 2014 01:30 AM

Tree simplification and the 'plateaux' phenomenon of graph Laplacian eigenvalues. (arXiv:1410.7842v1 [cs.DM])

We developed a procedure for reducing the number of vertices and edges of a given tree, which we call the "tree simplification procedure," without changing its topological information. Our motivation for developing this procedure was to reduce the computational cost of graph Laplacian eigenvalues of such trees. When we applied this procedure to a set of trees representing dendritic structures of retinal ganglion cells of a mouse and computed their graph Laplacian eigenvalues, we observed two "plateaux" (i.e., two sets of multiple eigenvalues) in the eigenvalue distribution of each such simplified tree. In this article, after describing our tree simplification procedure, we analyze why such eigenvalue plateaux occur in a simplified tree, explain that such plateaux can occur in a more general graph if it satisfies a certain condition, and identify these two eigenvalues specifically as well as the lower bound on their multiplicity.

by <a href="">Naoki Saito</a>, <a href="">Ernest Woei</a> at October 30, 2014 01:30 AM

Energy-Aware Lease Scheduling in Virtualized Data Centers. (arXiv:1410.7815v1 [cs.DC])

Energy efficiency has become an important measure of scheduling algorithms in virtualized data centers. One of the challenges for energy-efficient scheduling algorithms, however, is the trade-off between minimizing energy consumption and satisfying quality of service (e.g. performance, resource availability on time for reservation requests). We consider resource needs in the context of virtualized data centers of a private cloud system, which provides resource leases in terms of virtual machines (VMs) for user applications. In this paper, we propose heuristics for scheduling VMs that address the above challenge. In performance evaluation, simulation results have shown a significant reduction in total energy consumption by our proposed algorithms compared with an existing First-Come-First-Serve (FCFS) scheduling algorithm, with the same fulfillment of performance requirements. We also discuss the improvement in energy saving when additionally applying migration policies to the above-mentioned algorithms.

by <a href="">Nguyen Quang-Hung</a>, <a href="">Nam Thoai</a>, <a href="">Nguyen Thanh Son</a>, <a href="">Duy-Khanh Le</a> at October 30, 2014 01:30 AM

Unified spectral bounds on the chromatic number. (arXiv:1210.7844v5 [math.CO] UPDATED)

One of the best known results in spectral graph theory is the following lower bound on the chromatic number due to Alan Hoffman, where mu_1 and mu_n are respectively the maximum and minimum eigenvalues of the adjacency matrix: chi >= 1 + mu_1 / (- mu_n). We recently generalised this bound to include all eigenvalues of the adjacency matrix.

In this paper, we further generalize these results to include all eigenvalues of the adjacency, Laplacian and signless Laplacian matrices. The various known bounds are also unified by considering the normalized adjacency matrix, and examples are cited for which the new bounds outperform known bounds.
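As a quick illustration of Hoffman's bound (my own sanity check, not code from the paper), the complete graph K_n, whose adjacency spectrum is known in closed form, shows the bound can be tight:

```python
# Hoffman's bound: chi >= 1 + mu_1 / (-mu_n). For the complete graph K_n the
# adjacency spectrum is known in closed form: mu_1 = n - 1 and mu_n = -1,
# so the bound gives 1 + (n - 1)/1 = n, which is exactly chi(K_n).
for n in range(2, 7):
    mu_1, mu_n = n - 1, -1
    bound = 1 + mu_1 / (-mu_n)
    print(n, bound)  # the bound equals the chromatic number n: tight on K_n
```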

by <a href="">Clive Elphick</a>, <a href="">Pawel Wocjan</a> at October 30, 2014 01:30 AM

The Computational Complexity of Disconnected Cut and 2K2-Partition. (arXiv:1104.4779v3 [cs.CC] UPDATED)

For a connected graph G=(V,E), a subset U of V is called a disconnected cut if U disconnects the graph and the subgraph induced by U is disconnected as well. We show that the problem to test whether a graph has a disconnected cut is NP-complete. This problem is polynomially equivalent to the following problems: testing if a graph has a 2K2-partition, testing if a graph allows a vertex-surjective homomorphism to the reflexive 4-cycle and testing if a graph has a spanning subgraph that consists of at most two bicliques. Hence, as an immediate consequence, these three decision problems are NP-complete as well. This settles an open problem frequently posed in each of the four settings.

by <a href="">Barnaby Martin</a>, <a href="">Daniel Paulusma</a> at October 30, 2014 01:30 AM


How to compute annuity payment? [on hold]

I am trying to answer a question that I already know the answer to but I don't know how they got there. The question is:

Your subscription to Consumer Reports is about to expire. You may renew it for \$24 a year or, instead, you may get a lifetime subscription to the magazine for a one-time payment of \$400 today. Payments for the regular subscription are made at the beginning of each year. Using a discount rate of 5%, how many years does it take to make the lifetime subscription the better deal?

  1. 25 years
  2. 28 years
  3. 30 years
  4. 40 years

The answer is:

BGN mode
PV 400
FV 0
PMT -24
I/Y 5
N=32.3 (C)

I have no idea how they got that number.

I have tried it on my ti-84 several times and I am getting 16 years for my N, not 32.

What equation are they using?
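For what it's worth, the calculator's 32.3 can be reproduced by treating the \$24 payments as an annuity due (BGN mode = payments at the beginning of each year) and solving the present-value equation for N. A quick Python check (my reconstruction, not the answer key's working):

```python
import math

# Annuity due: PV = PMT * (1 - (1 + i)**-N) / i * (1 + i).
# With PV = 400, PMT = 24, i = 0.05, rearrange for N:
#   (1 + i)**-N = 1 - PV * i / (PMT * (1 + i))
pv, pmt, i = 400.0, 24.0, 0.05
n = -math.log(1 - pv * i / (pmt * (1 + i))) / math.log(1 + i)
print(round(n, 1))  # 32.3, matching the BGN-mode calculator result
```

The sign conventions matter here: the solver finds the N at which paying \$400 now equals the discounted stream of \$24 beginning-of-year payments.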

by Richard at October 30, 2014 01:25 AM


Planet Clojure

Onyx 0.4.0: DAGs, catalog grouping, lib-onyx


I’m pleased to announce the release of Onyx 0.4.0. It’s been about 6 weeks since Onyx was released at StrangeLoop. I’ve been quiet on the blog, but steadily grinding out new features and bug fixes. Onyx is already starting to make its way into non-critical production systems in the field. This release of Onyx is a massive step forward, shipping fundamental advancements to the information model. The release notes are here, but I’d like to take you on a tour of exactly what’s new myself.

Directed Acyclic Graph Workflows

The biggest change coincides with Onyx’s workflow representation. Originally, Onyx’s workflow specifications were strictly outward branching trees. Starting in 0.4.0, you can also model directed, acyclic graph workflows to join data flows back together using a vector-of-vectors. This new type of workflow makes it natural to write streaming joins on your data set. Onyx remains fully backwards compatible with the original version. Check out the comparison in the docs.


A DAG example. Inputs A, B, and C. Outputs J, K, and L. All other nodes are functions.

Catalog-level Grouping

Prior to 0.4.0, Onyx featured grouping and aggregation through special elements in a workflow. In an email exchange, David Greenberg suggested that grouping could instead be specified inside the catalog entry. I gave this some serious thought, and realized that grouping at the level of a workflow is a form of structural complecting. Data flow representation ought to be orthogonal to how Onyx pins particular segments across different virtual peers. I present to you two new ways to do grouping in a fully data driven manner: automatic grouping by key, and grouping by arbitrary function. Aggregation now becomes an implicit, user level activity at each task site in the workflow. This change significantly refines Onyx’s information model with respect to keeping anything that’s not structure out of the workflow. Thanks David!


An Onyx workflow for word count. Note the :onyx/group-by-key :word association. This automatically pins all segments with the same value for :word to the same virtual peer. That means each peer gets all segments with a particular value assigned to it, and the “count” of each word is correct with respect to the entire data set.


My final piece of exciting news is the announcement of a new supporting library - lib-onyx. Onyx has been built from the ground up on the notion that you can combine a sound information model with the full power of Clojure - everywhere. It’s no surprise that shortly after launching, reusable idioms started springing up across different code bases. lib-onyx is a library that packages up common techniques for doing operations like in-memory streaming joins, automatic message retry, and interval-based actions. You can use all of these operations today by adding lib-onyx to your project and adding a few extra key/values to your catalog entry. Contrast this composability with Apache Storm’s Tick Tuple feature. Just like core.async changed the Clojure world without touching the language itself, neither did Onyx need any implementation adjustments. lib-onyx is just a library.


That’s all I have for now. Thank you so much to everyone who has helped me since Onyx launched. I’d like to thank Bruce Durling, Malcolm Sparks, Lucas Bradstreet, and Bryce Blanton for their contributions to the 0.4.0 release. Now we turn our attention to 0.5.0 - especially exciting things will be happening in the next few months. Stay tuned, friends!

by Michael Drogalis at October 30, 2014 01:19 AM

DragonFly BSD Digest

DragonFly DRM1 drivers dropped

As Francois Tigeot has pointed out, recent Mesa upgrades have made very old graphics drivers using DRM1 no longer work.  They’ve been removed.  This won’t affect you unless your graphics card is 10+ years old.

by Justin Sherrill at October 30, 2014 01:15 AM


Portland Pattern Repository



How to calculate the Sharpe ratio from a 5-year historical database with daily closing prices? [on hold]

So let's say I have a database with the closing price of a stock, every day, for the last 5 years. From that info, how can I calculate the Sharpe ratio? I know the formula for the Sharpe ratio but I am looking for a practical example. How do I calculate the terms of the formula if I have the historical price data?
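A practical sketch of the mechanics, with made-up closing prices standing in for the 5-year database, assuming a zero risk-free rate and ~252 trading days per year (both assumptions to adjust as needed):

```python
# Made-up closing prices; in practice, feed in the daily closes from the DB.
closes = [100.0, 101.0, 100.5, 102.0, 103.0, 102.5]

# 1. daily simple returns from consecutive closes
returns = [b / a - 1 for a, b in zip(closes, closes[1:])]

# 2. mean daily excess return (risk-free rate assumed 0) and sample std dev
mean = sum(returns) / len(returns)
variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
std_dev = variance ** 0.5

# 3. annualized Sharpe ratio
sharpe = mean / std_dev * 252 ** 0.5
print(round(sharpe, 2))
```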


by TraderJenny at October 30, 2014 12:52 AM


Which algorithm to use to find all common substring (LCS case) with really big strings

I'm looking for a particular case of the longest common substring (LCS) problem. In my case I have two really big strings (tens or hundreds of millions of byte characters) and need to find the LCS and other long strings.

A simplified example



the LCS = CADGGH (6 chars) and other long strings with 5 chars are CADGG, ADGGH and EASSS.

Which is the fastest algorithm to get all common substrings with their lengths? (list all substrings and lengths) And in my case (very big byte substrings) which is the fastest LCS algorithm? (only get longest common substrings).

NOTE: In particular I don't have any space limit now, but this algorithm may be implemented in a mobile device in a future and is possible to have a very limited RAM/disk space (but always, at least, I have the same disk space available as the sum of file lengths).
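For context, the classic dynamic-programming solution (shown below with hypothetical input strings, since the question's example strings were lost) runs in O(n·m) time; that is far too slow for hundred-megabyte inputs, where a generalized suffix tree or suffix automaton (linear time and space) is the usual answer:

```python
def longest_common_substring(a, b):
    """O(len(a)*len(b)) DP with a rolling row; fine for small inputs,
    far too slow for hundred-megabyte strings (use a suffix automaton
    or generalized suffix array there)."""
    best, end_a = 0, 0
    prev = [0] * (len(b) + 1)  # prev[j] = common-suffix length at a[i-2], b[j-1]
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, end_a = cur[j], i
        prev = cur
    return a[end_a - best:end_a]

print(longest_common_substring("xCADGGHy", "zCADGGHw"))  # CADGGH
```

The same DP table also contains every shorter common substring (any cell value k marks a length-k match ending there), which covers the "all substrings with lengths" part of the question, at the same quadratic cost.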

by Ivan at October 30, 2014 12:52 AM

Planet Theory

Online Top-k-Position Monitoring of Distributed Data Streams

Authors: Alexander Mäcker, Manuel Malatyali, Friedhelm Meyer auf der Heide
Download: PDF
Abstract: Consider n nodes connected to a single coordinator. Each node receives an individual online data stream of numbers and, at any point in time, the coordinator has to know the k nodes currently observing the largest values, for a given k between 1 and n. We design and analyze an algorithm that solves this problem while bounding the amount of messages exchanged between the nodes and the coordinator. Our algorithm employs the idea of using filters which, intuitively speaking, leads to few messages to be sent, if the new input is "similar" to the previous ones. The algorithm uses a number of messages that is on expectation by a factor of O((log {\Delta} + k) log n) larger than that of an offline algorithm that sets filters in an optimal way, where {\Delta} is upper bounded by the largest value observed by any node.

October 30, 2014 12:40 AM

Optimal Online Edge Coloring of Planar Graphs with Advice

Authors: Jesper W. Mikkelsen
Download: PDF
Abstract: We study the amount of knowledge about the future that an online algorithm needs to color the edges of a graph optimally (i.e. using as few colors as possible). Assume that along with each edge, the online algorithm receives a fixed number of advice bits. For graphs of maximum degree $\Delta$, it follows from Vizing's Theorem that $\lceil \log(\Delta+1)\rceil$ bits per edge suffice to achieve optimality. We show that even for bipartite graphs, $\Omega(\log \Delta)$ bits per edge are in fact necessary. However, we also show that there is an online algorithm which can color the edges of a $d$-degenerate graph optimally using $O(\log d)$ bits of advice per edge. It follows that for planar graphs and other graph classes of bounded degeneracy, only $O(1)$ bits per edge are needed, independently of how large $\Delta$ is.

October 30, 2014 12:40 AM

Errata for: A subexponential lower bound for the Random Facet algorithm for Parity Games

Authors: Oliver Friedmann, Thomas Dueholm Hansen, Uri Zwick
Download: PDF
Abstract: In Friedmann, Hansen, and Zwick (2011) we claimed that the expected number of pivoting steps performed by the Random-Facet algorithm of Kalai and of Matousek, Sharir, and Welzl is equal to the expected number of pivoting steps performed by Random-Facet^*, a variant of Random-Facet that bases its random decisions on one random permutation. We then obtained a lower bound on the expected number of pivoting steps performed by Random-Facet^* and claimed that the same lower bound holds also for Random-Facet. Unfortunately, the claim that the expected numbers of steps performed by Random-Facet and Random-Facet^* are the same is false. We provide here simple examples that show that the expected numbers of steps performed by the two algorithms are not the same.

October 30, 2014 12:40 AM



Finding cache block transfer time in a 3 level memory system

The following question was asked in an entrance exam for a graduate programme. Please help me try to solve it:

A computer system has an L1 cache, an L2 cache, and a main memory unit connected as shown below. The block size in L1 cache is 4 words. The block size in L2 cache is 16 words. The memory access times are 2 nanoseconds, 20 nanoseconds and 200 nanoseconds for L1 cache, L2 cache and main memory unit respectively.


  1. When there is a miss in L1 cache and a hit in L2 cache, a block is transferred from L2 cache to L1 cache. What is the time taken for this transfer?

(A) 2 nanoseconds (B) 20 nanoseconds (C) 22 nanoseconds (D) 88 nanoseconds

  1. When there is a miss in both L1 cache and L2 cache, first a block is transferred from main memory to L2 cache, and then a block is transferred from L2 cache to L1 cache. What is the total time taken for these transfers?

(A) 222 nanoseconds (B) 888 nanoseconds (C) 902 nanoseconds (D) 968 nanoseconds

The first thing that came to my mind was: how do I calculate the transfer time from the given access times? During a miss, a block of data is moved from main memory to cache, and then the CPU accesses it. So, wouldn't access time be greater than transfer time?

Then I thought, let's assume access time = transfer time and do the calculation.

Now the first question. The question already states there is a miss in L1, so I will not consider L1 access time. Since there is a miss in L1 and a hit in L2, an entire block from L2 has to be moved to L1. The L2 block size is 16 words, but the data bus size is 4 words.

So we have to move 4 words * 4 times.

To transfer 4 words it takes 20 ns, so transferring all 16 words takes 4 * 20 = 80 ns. Isn't that the time to transfer the block from L2 to L1? The question does not say anything about accessing L1 after moving the data. But 80 ns is not among the options!

Similar case with second question also.

Time to move a block from main memory to L2 = 4 transfers of 4 words each = 4 * 200 = 800 ns

Time to move L2 to L1 = 80ns [earlier calculation]

So the total time taken is 880 ns, which again is not among the options.

Either I am making a very big mistake, or the options are wrong, or the question isn't framed correctly. If I am doing anything wrong, please give me a hint and I will try to work on this exercise again.

by avi at October 30, 2014 12:24 AM


How to clean up "a type was inferred to be `Any`" warning?

I have the following code:

class TestActor() extends RootsActor() {

  // Receive is a type resolving to PartialFunction[Any, Unit]
  def rec2: Actor.Receive = {
    case "ping" => println("Ping received!!!")
  }

  def recAll = List(super.receive, rec2)

  // Compose parent class' receive behavior with this class' receive
  override def receive = recAll.reduceLeft { (a, b) => a orElse b }
}

This functions correctly when run, but it produces the following warning:

[warn] /Users/me/git/proj/roots/src/multi-jvm/scala/stuff/TestActor.scala:18: a type was inferred to be `Any`; this may indicate a programming error.
[warn]  override def receive = recAll.reduceLeft { (a,b) => a orElse b }
[warn]                                                   ^

How can I change this code to clean up the warning?

by Greg at October 30, 2014 12:05 AM

Stack overflow. Function fails to stop at base case

I have a class called Rational:

class Rational(x: Int, y: Int) {

  def numer = x
  def denom = y

  def add(r: Rational) =
    new Rational(numer * r.denom + r.numer * denom,
      denom * r.denom)

  override def toString = numer + "/" + denom
  def neg = new Rational(-numer, denom)
  def -(that: Rational) = add(that.neg)
  def <(that: Rational) = numer * that.denom < that.numer * denom
  def max(that: Rational) =
    if (this < that) that else this

  def /(r: Rational) = new Rational(numer * r.denom, denom * r.numer)
  def numerMIn = new Rational(numer - 1, denom)
  def denomMIn = new Rational(numer, denom - 1)
}


I've written this function, which is a subfunction of a bigger one. I'm just debugging it. Assume b is the Rational 3/4 and we pass the Rational 2/4 into iter.

def iter(c: Rational): Rational =
  if (c.denom == 0) new Rational(0, 0)
  else if (!(c < b)) denomDec(c)
  else c add iter(denomDec(c))


And here's denomDec:

def denomDec(r: Rational) = new Rational(r.numer, r.denom-1)

The problem is that the iter function does not stop and runs into Stack Overflow.

Here's what happens inside iter(). It gets 2/4 as its argument. It takes 2/4 + 2/3 + ... and when it reaches 2/2, it calls denomDec to get 2/1, because 2/2 fails to be smaller than 3/4. Then denomDec gets to 2/0, and here it is supposed to return new Rational(0,0), but it fails to do so, which in turn causes the stack overflow.

My question is: why doesn't the function stop recurring when it hits the base case, which is if (c.denom == 0) new Rational(0, 0)?

For clarification: example iter(2/4) should do this: 2/4 + 2/3 + 0/0. 2/2 and 2/1 are skipped as they are both greater than 3/4.
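A self-contained Python re-sketch (not the poster's Scala class) of the behaviour described, where the non-smaller terms are skipped by recurring on denomDec rather than returned directly. Note that in the posted Scala, the `!(c < b)` branch returns `denomDec(c)` *without* recursing, so that path can never reach the `denom == 0` base case:

```python
# Rationals as (numer, denom) tuples, mirroring the poster's unreduced class.
def add(a, b):
    (an, ad), (bn, bd) = a, b
    return (an * bd + bn * ad, ad * bd)

def less(a, b):  # a < b by cross-multiplication, as in the poster's `<`
    (an, ad), (bn, bd) = a, b
    return an * bd < bn * ad

def denom_dec(r):
    n, d = r
    return (n, d - 1)

B = (3, 4)

def iter_rational(c):
    if c[1] == 0:
        return (0, 0)
    if not less(c, B):
        return iter_rational(denom_dec(c))  # skip the term but keep recurring
    return add(c, iter_rational(denom_dec(c)))

print(iter_rational((2, 4)))  # (0, 0)
```

Also worth noting: with this definition of add, the 0/0 term multiplies every denominator by zero, so the whole sum 2/4 + 2/3 + 0/0 collapses to 0/0.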

by John Peterson at October 30, 2014 12:00 AM

Planet Clojure

What language should you learn?

I travel a lot these days. I'd call myself a “digital nomad” as a shorthand, if there was any way to say it without sounding impossibly smug. Let's just say I'm homeless but employed and my wife and I live in AirBnbs.

One of the challenges of moving around so much is dealing with language barriers. For the most part, even in places where English isn't widely understood, it's perfectly possible to get whatever you need with gestures, chief among them pointing and holding up money. It's the little things that are harder when you can't speak the language.

By way of example, I spent much of today wandering the streets of Istanbul in search of somewhere I could buy a simple envelope, because it turns out that without a Staples around I'm completely incapable of purchasing office supplies. I bet somewhere there's a whole bazaar full of old men with long grey beards flaunting staplers and paper clips – that's how it seems to go here – but I didn't happen across anyone who spoke enough English to ask.

So, I really prefer having at least some grasp of the language of wherever I'm going, but learning a language is pretty tough work, so being a nerd as well as a clueless putz I decided today to compute empirically which languages are the best to learn, if one were to, hypothetically, travel around randomly on the basis of where has the cheapest airline fares from here.

Get to the point, dammit.

I've been wondering this for a little while, but it turns out to be a trending topic on Quora today too, so I thought I might as well put the effort in and just get some numbers. Specifically, one of the answers on Quora linked to this page, listing the most widely spoken languages in the world along with which countries they're spoken in. It might seem that most of the work is done here, but the problem not addressed by this list is that of overlap – if you came to Canada after learning French on the strength of this list, you'd probably be mighty disappointed.

The part of the process where we debate the merits of different ways of measuring the number of speakers of a language in which country was definitely something I wanted to avoid, so this is perfect. We have a simple job: take this data, and figure out which sets of N languages cover the most countries.
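The job reduces to brute-forcing set unions. Here's the shape of it in Python with a tiny made-up language-to-countries table (the post's actual data and code are Clojure, in the linked gist):

```python
from itertools import combinations

# A tiny hypothetical language -> countries table standing in for the
# Wikipedia-derived data the post uses.
speaks = {
    "English": {"US", "UK", "Canada", "India", "Nigeria"},
    "French": {"France", "Canada", "Senegal"},
    "Spanish": {"Spain", "Mexico", "US"},
}

# For every pair of languages, count the union of countries covered,
# then rank pairs by coverage. Overlaps (e.g. Canada) are counted once.
pairs = sorted(
    ((a, b, len(speaks[a] | speaks[b])) for a, b in combinations(speaks, 2)),
    key=lambda t: -t[2],
)
for a, b, cover in pairs:
    print(f"{a} + {b}: {cover} countries")
```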

Source Code

If you're interested in the code, I've put the commented clojure source in this gist for your perusal.

The Results + Discussion

If you can only learn two languages, you should learn English and French. Here are the top pairings:

Pairing                  # Countries
English, French          95
Arabic, English          91
English, Turkic          90
English, Spanish         89
English, Portuguese      79
English, Russian         79
Persian, English         77
German, English          75
Italian, English         74
Dutch, English           73
Chinese, English         71
Indonesian, English      71
Tamil, English           71
English, Swedish         70
English, Romanian        70
Bengali, English         69
English, Hindi           69
Turkic, French           54

I took an arbitrary sample there because it's interesting to me that the top pairing without English (Turkic + French) gets you by in significantly fewer countries than just English. Lucky us.

Boringly, the top result is what you would predict from the Wikipedia article anyhow. I thought there might be more overlap between English and French, but perhaps that's just because I'm so used to it being Canadian. Actually, most of the results are just English + (other languages in descending order of speakers).

However, this is good news for us native-english-speakers: French and English actually overlap a lot linguistically. About 30% of English words have French roots.

What about learning three languages? Perhaps the results of that will be less boring. If you're more on-the-ball, here's how you'll do:

Languages                        # Countries
English, Turkic, French          117
English, Spanish, French         115
Arabic, English, French          114
Arabic, English, Spanish         112
English, Spanish, Turkic         111
Arabic, English, Turkic          111
English, Russian, French         106
English, Portuguese, French      105
Persian, English, French         104
Arabic, English, Portuguese      102
English, Turkic, Portuguese      101
Arabic, English, Russian         101
English, Russian, Spanish        100
Italian, English, French         100

There you have it: your third language should be Turkic. It makes sense, given the small overlap between Arabic and French in northwest Africa.

I'm most intrigued by the English-Spanish-French pairing, actually. There's a lot of overlap between Spanish and French too, so this is almost certainly the easiest triple to learn for native English speakers.

So there you have it: learn you some French. Bonne chance, et au revoir!

by Adam Bard at October 30, 2014 12:00 AM

HN Daily

October 29, 2014


Using spray client to make REST web service calls inside an Actor system

I've an Actor system that is processing a continuous stream messages from an external system. I've the following actors in my system.

  1. SubscribeActor - this actor subscribes to a Redis channel and creates a new InferActor and passes the JSON payload to it.
  2. InferenceActor - this actor is responsible for 2a. parsing the payload and extracting some text values from the JSON payload, and 2b. calling an external REST service, passing it the values extracted in 2a. The REST service is deployed on a different node in the LAN and does a fair bit of heavy lifting in terms of computation.

The external REST service in 2b is invoked using a Spray client. I tested the system and it works fine till 2a. However, as soon as I introduce 2b. I start to get OutOfMemory errors and the system eventually comes to a halt.

Currently, I've two primary suspects -

  1. Design flaw - The way I'm using the Spray client inside my actor system is not correct (I'm new to Spray)
  2. Performance issues due the latency caused by the slow REST service.

Before I go to #2 I want to make sure that I'm using the Spray client correctly, esp. when I'm calling it from other actors. My question is: is the usage below correct, incorrect, or suboptimal?

Here is the code of the web service REST client that invokes the service.

import scala.concurrent.Future
import akka.actor.ActorSystem
import spray.client.pipelining._
import spray.http._

trait GeoWebClient {
  def get(url: String, params: Map[String, String]): Future[String]
}

class GeoSprayWebClient(implicit system: ActorSystem) extends GeoWebClient {

  import system.dispatcher

  // create a function from HttpRequest to a Future of HttpResponse
  val pipeline: HttpRequest => Future[HttpResponse] = sendReceive

  // create a function to send a GET request and receive a string response
  def get(path: String, params: Map[String, String]): Future[String] = {

    val uri = Uri("http://myhost:9191/infer") withQuery params
    val request = Get(uri)
    val futureResponse = pipeline(request)
    // extract the response body as a String to satisfy the declared return type
    futureResponse.map(_.entity.asString)
  }
}
And here is the code for InferenceActor that invokes the service above.

class InferenceActor extends Actor with ActorLogging with ParseUtils {

  val system = context.system
  import system.dispatcher
  val restServiceClient = new GeoSprayWebClient()(system)

  def receive = {

    case JsonMsg(s) => {

      // first parse the message
      val text: Option[String] = parseAndExtractText(s) // defined in ParseUtils trait
      log.info(s"extract text $text")

      def sendReq(text: String): Future[String] = {
        import spray.http._
        val params = Map("text" -> text)
        // send GET request with absolute URI
        restServiceClient.get("http://myhost:9191/infer", params)
      }

      val f: Option[Future[String]] = text.map(x => sendReq(x))

      // wait for Future to complete NOTE: I commented this code without any change.
      /* f.foreach { r => r.onComplete {
        case Success(response) => log.debug("*********************" + response)
        case Failure(error) => log.info("An error has occurred: " + error.getMessage)
      } } */
      context stop self
    }
  }
}


by Soumya Simanta at October 29, 2014 11:59 PM

In Scala, can I say Future and Promise are also a kind of monad?

I have been struggling to understand monads.

I concluded that a monad is a box of values which supports some specific operations.

So, can I say that Future and Promise are also kinds of monads?
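
For what it's worth, Scala's Future does have the two monad ingredients, a unit constructor (`Future.successful` / `Future.apply`) and `flatMap`, which is why it works in for-comprehensions. A minimal sketch:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureMonadSketch {
  // unit: Future.successful; bind: flatMap.
  // The for-comprehension desugars into nested flatMap/map calls.
  val sum: Future[Int] =
    for {
      a <- Future.successful(2)
      b <- Future.successful(3)
    } yield a + b

  def main(args: Array[String]): Unit =
    println(Await.result(sum, 1.second)) // prints 5
}
```

Two caveats (my reading, not from the post): Future is only "monad-like", since flatMap requires an implicit ExecutionContext and a Future starts running eagerly, so the monad laws hold only up to those effects; and Promise is not a monad at all, it is a write-once handle whose `.future` side is the monadic value.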

by Notice at October 29, 2014 11:43 PM


What does an NFA do if there's no transition with the correct symbol?

So I am learning about DFA and NFA, and I need some clarification for it.


DFA:

  • accept empty set
  • transition for every element in the alphabet
  • paths are deterministic

NFA:

  • accept the empty set and empty string
  • paths are not deterministic

My question is: does an NFA need a transition for every element of the alphabet? If not, then say the alphabet is {0, 1} and I am at a state with no transition for 1; do I go to the empty state or something?
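
One way to see it: an NFA's transition function returns a set of states, and a missing transition just yields the empty set (equivalently, an implicit "dead state"); the word is accepted iff some accepting state is still reachable at the end. A small sketch with a made-up two-state NFA over {0, 1}:

```scala
object NfaSketch {
  // States 0 and 1; only (0, '1') has any transitions; 1 is accepting.
  // A missing (state, symbol) pair behaves as the empty successor set.
  val delta: Map[(Int, Char), Set[Int]] = Map((0, '1') -> Set(0, 1))
  val accepting: Set[Int] = Set(1)

  def accepts(input: String): Boolean = {
    val reached = input.foldLeft(Set(0)) { (states, sym) =>
      states.flatMap(s => delta.getOrElse((s, sym), Set.empty[Int]))
    }
    reached.exists(accepting)
  }

  def main(args: Array[String]): Unit = {
    println(accepts("11")) // true: state 1 is reachable
    println(accepts("10")) // false: the run dies on '0', no transition exists
  }
}
```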

by holidayeveryday at October 29, 2014 11:39 PM

Algorithm for rope burning problem [on hold]

Generalized from a technical interview question:

Original question: There are two ropes, each rope takes 1 hour to burn. But either rope has different densities at different points, so there’s no guarantee of consistency in the time it takes different sections within the rope to burn.

How do you use these two ropes to measure 45 minutes?

I have a generalized version:

There are n ropes, each rope takes x minutes to burn (for simplicity assume x is positive integer). But the ropes have different densities at different points, so there’s no guarantee of consistency in the time it takes different sections within the ropes to burn.

Using these n ropes, what time quantity can you measure?

For example, with n = 1 and x = 60, I can measure 60 minute period (burning one end of the rope), or 30 minute period (burning both ends of the rope at the same time)

Of course my aim would be finding an algorithm with minimal complexity. I imagine the solution to this would involve dynamic programming, but I am not quite sure. My brute force solution is as followed:

  1. Start at minute 0; we have n ropes, each taking x minutes to burn. For a given rope, we can choose to burn both ends, burn one end, or not burn the rope at all. Let the number of ropes burnt at both ends at this stage be a, the number burnt at one end be b, and the number not burnt at all be c. We have a + b + c = n, where a, b, c are non-negative integers and a + b != 0. Consider all possible cases for a, b, and c and add those cases to a stack/queue.
  2. For each item in the stack/queue, determine how many minutes have passed when some rope finishes burning. Output the time that has passed (calculated from how long the finished rope has burnt, and which ends were lit at what time). Now we have another scenario with a certain number of ropes still burning. Repeat the step-1 argument with a + b + c = n - 1 (with constraints imposed on a, b, and c, since some ropes are still burning and we cannot put those fires out) and add all the newly generated cases to the stack/queue.
  3. Repeat step 2 until n = 0 (all ropes have finished burning).

Edit: It was mentioned that my question was unclear. My question is: given n ropes, each taking x minutes to burn, find an algorithm that outputs all possible periods of time that can be measured using these n ropes. What would the complexity of that algorithm be?

by vda8888 at October 29, 2014 11:29 PM


Need help with these discrete math problems.

  • Use backward substitution to solve the following recurrence equations. Give the big-oh notation for each function.

o T(n) = 4T(n-1), T(1) = 4

o T(n) = T(n-1) + 2n, T(0) = 0
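
Backward substitution gives T(n) = 4·4^(n-1) = 4^n for the first recurrence (O(4^n)) and T(n) = 2n + 2(n-1) + ... + 2·1 = n(n+1) for the second (O(n^2)). A quick numeric check of those closed forms:

```scala
object RecurrenceCheck {
  // T(n) = 4*T(n-1), T(1) = 4   =>  T(n) = 4^n     (O(4^n))
  def t1(n: Int): Long = if (n == 1) 4L else 4L * t1(n - 1)

  // T(n) = T(n-1) + 2n, T(0) = 0  =>  T(n) = n(n+1)  (O(n^2))
  def t2(n: Int): Long = if (n == 0) 0L else t2(n - 1) + 2L * n

  def closedFormsHold: Boolean =
    (1 to 10).forall(n => t1(n) == math.pow(4, n).toLong) &&
      (0 to 100).forall(n => t2(n) == n.toLong * (n + 1))

  def main(args: Array[String]): Unit =
    println(closedFormsHold) // prints true
}
```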

submitted by In-nox
[link] [1 comment]

October 29, 2014 11:26 PM


Interactive example of Dynamic scoping and evaluation order of expressions

Given the following (arbitrary language, although I think it is close to Algol 60) program:

program main;                               // A main parent level
  var i : integer;                          // A 'global' variable

  (* Note that all parameters are passed by value here *)

  function f1 (j : integer) : integer;      // A Child function
  begin { f1 }
    i := i + 3;
    f1 := 2 * j - i;
  end; { f1 }

  function f2 (k : integer) : integer;      // Another Child function, same level as f1
    var i : integer;                        // Here, there is a variable that is declared
  begin { f2 }                                 // but no value assigned
    i := k / 2;
    f2 := f1(i) + f1(k);
  end; { f2 }

begin { main }                              // Running/Calling/Executing the code
  i := 8;
  i := i + f2(i);
end. { main }

How would you trace the values of the variables throughout the program when it is interpreted using dynamic scoping of free variables, first when the arguments appearing in expressions are evaluated left to right, and then when they are evaluated right to left, so that the user can watch what happens?

I have created a JS plnkr for Static Scoping with Left to Right evaluation and another for Static Scoping with Right to Left evaluation. Feel free to adapt these answers (if possible) for Dynamic Scoping, with L->R and R->L evaluation.

I chose plnkrs because I knew I could get the Static/lexical side using JS, but am unsure of how to make it happen dynamically or in another interactive environment (preferably not one I have to install).

I learn a bit more slowly on problems like this where only the final output values are asked for, without showing the value states throughout the program, and I am trying to get a better understanding, especially via an example I can play around with interactively, as the book examples are really bad. In the code above it also gets challenging, because it appears that the variable i in line 2 is allocated but would be undefined. But that may be my imperative/functional brain making it more complicated than it is...
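
Since the question asks for something executable: dynamic scoping can be simulated directly by resolving free variables against a stack of activation records, innermost frame first. The sketch below is my own modelling, not from the post; it mirrors the program above with left-to-right evaluation, and right-to-left would mean evaluating f1(k) before f1(i), and f2(i) before the outer read of i.

```scala
import scala.collection.mutable

object DynamicScopeTrace {
  // Dynamic scoping: look a name up in the innermost frame that defines it.
  private val stack = mutable.Stack[mutable.Map[String, Int]]()
  private def frame(n: String) =
    stack.find(_.contains(n)).getOrElse(sys.error(s"unbound: $n"))
  def get(n: String): Int = frame(n)(n)
  def set(n: String, v: Int): Unit = frame(n)(n) = v

  def f1(j: Int): Int = { set("i", get("i") + 3); 2 * j - get("i") } // free i

  def f2(k: Int): Int = {
    stack.push(mutable.Map("i" -> 0)) // f2's local i shadows main's i
    set("i", k / 2)
    val r = f1(get("i")) + f1(k)      // left-to-right operand evaluation
    stack.pop()
    r
  }

  def run(): Int = {
    stack.clear()
    stack.push(mutable.Map("i" -> 8)) // main's i := 8
    set("i", get("i") + f2(get("i")))
    get("i")
  }

  def main(args: Array[String]): Unit =
    println(run()) // prints 15: f2(8) = f1(4) + f1(8) = 1 + 6 = 7, then 8 + 7
}
```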

by chris Frisina at October 29, 2014 11:19 PM


Should we always use `override` in Trait

Should we always use override in traits to preemptively solve the diamond inheritance issue?

Let's see an example to explain the point:

trait S { def get : String }
trait A extends S { override def get = "A" }
trait B extends S { override def get = "B" }
class C extends A with B

Without override, the code above doesn't compile.
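
With the overrides in place the diamond is resolved by trait linearization: C linearizes to C, B, A, S, and the implementation found first from the right wins. A quick check:

```scala
object LinearizationSketch {
  trait S { def get: String }
  trait A extends S { override def get = "A" }
  trait B extends S { override def get = "B" }
  class C extends A with B

  def main(args: Array[String]): Unit =
    println((new C).get) // prints B: B comes before A in C's linearization
}
```

Without the override modifiers, A and B alone still compile (they implement an abstract member), but mixing them together in C is rejected with a "needs `override' modifier" error, i.e. the diamond is reported rather than silently resolved.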

by Yann Moisan at October 29, 2014 11:06 PM



New Microsoft System Center Virtual Machine Manager Add-In

Many enterprise-scale AWS customers also have a large collection of virtualized Windows servers on their premises. These customers are now moving all sorts of workloads to the Cloud and have been looking for a unified solution to their on-premises and cloud-based system management needs. Using multiple tools to accomplish the same basic tasks (monitoring and controlling virtualized servers or instances) is inefficient and adds complexity to the development of solutions that use a combination of on-premises and cloud resources.

New Add-In
In order to allow this important customer base to manage their resources with greater efficiency, we are launching the AWS System Manager for Microsoft System Center Virtual Machine Manager (SCVMM). This add-in allows you to monitor and manage your Amazon Elastic Compute Cloud (EC2) instances (running either Windows or Linux) from within Microsoft System Center Virtual Machine Manager. You can use this add-in to perform common maintenance tasks such as restarting, stopping, and removing instances. You can also connect to the instances using the Remote Desktop Protocol (RDP).

Let's take a quick tour of the add-in! Here's the main screen:

You can select any public AWS Region:

After you launch an EC2 instance running Windows, you can use the add-in to retrieve, decrypt, and display the administrator password:

You can select multiple instances and operate on them as a group:

Available Now
The add-in is available for download today at no charge. After you download and install it, you simply enter your IAM credentials. The credentials will be associated with the logged-in Windows user on the host system so you'll have to enter them just once.

As is the case with every AWS product, we would be thrilled to get your feedback (feature suggestions, bug reports, and anything else that comes to mind). Send it to

-- Jeff;

by Jeff Barr ( at October 29, 2014 10:45 PM


Dividing code into functions. Is it good approach? [migrated]

I have been writing Swift code for a while, and I have read a lot of tutorials about how functional paradigms can be applied to Swift. Here is what I currently do: I defined a simple operator that looks like this.

infix operator >>> {associativity left}
func >>><A,B>(a: A?, f: A -> B?) -> B? {
    if let a = a {
        return f(a)
    } else {
        return .None
    }
}
Now for instance I am gonna populate a UITableViewCell instance from Core Data entity. Here is my code looks like:

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {

    func getWalks(dog: Dog) -> NSOrderedSet? {
        return dog.walks
    }

    func getWalkAtIndex(walks: NSOrderedSet) -> Walk? {
        return walks[indexPath.row] as? Walk
    }

    let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as UITableViewCell

    func setUpCell(walk: Walk) {
        let dateFormatter = NSDateFormatter()
        dateFormatter.dateStyle = .ShortStyle
        dateFormatter.timeStyle = .MediumStyle

        cell.textLabel.text = dateFormatter.stringFromDate(
    }

    self.currentDog >>> getWalks >>> getWalkAtIndex >>> setUpCell

    return cell
}
I am creating a function for every step. Right now I am not that good at separating the steps, but I kind of like this style of coding. What is your opinion of it?

by mstysf at October 29, 2014 10:42 PM

batch insert in scalikejdbc is slow on remote computer

I am trying to insert into a table in batches of 100 (I heard that is the best batch size to use with MySQL). I am using Scala 2.10.4 with sbt 0.13.6, and the JDBC framework I am using is ScalikeJDBC with HikariCP. My connection settings look like this:

val dataSource: DataSource = {
  val ds = new HikariDataSource()
  ds.addDataSourceProperty("url", "jdbc:mysql://" + org.Server.GlobalSettings.DB.mySQLIP + ":3306?rewriteBatchedStatements=true")
  ds.addDataSourceProperty("user", "someUser")
  ds.addDataSourceProperty("password", "not my password")
  ds
}

ConnectionPool.add('review, new DataSourceConnectionPool(dataSource))

The insert code:

try {
  implicit val session = AutoSession
  val paramList: scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]] = scala.collection.mutable.ListBuffer[Seq[(Symbol, Any)]]()
  paramList += Seq[(Symbol, Any)](
            'review_id -> rev.review_idx,
            'text -> rev.text,
            'category_id -> rev.category_id,
            'aspect_id -> aspectId,
            'not_aspect -> noAspect /*0*/ ,
            'certainty_aspect -> rev.certainty_aspect,
            'sentiment -> rev.sentiment,
            'sentiment_grade -> rev.certainty_sentiment,
            'stars -> rev.stars
  )
  try {
    if (paramList != null && paramList.length > 0) {
      val result = NamedDB('review) localTx { implicit session =>
        sql"""INSERT INTO `MasterFlow`.`classifier_results`
              (`review_id`, `text`, `category_id`, `aspect_id`,
               `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`)
              VALUES ({review_id}, {text}, {category_id}, {aspect_id},
              {not_aspect}, {certainty_aspect}, {sentiment}, {sentiment_grade}, {stars})"""
          .batchByName(paramList.toIndexedSeq: _*)/*.__resultOfEnsuring*/
          .apply()
      }
    }
  } catch {
    case e: Exception => e.printStackTrace()
  }
}

Each batch insert takes 15 seconds; my logs:

29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - Before cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:36 - DEBUG[Hikari Housekeeping Timer (pool HikariPool-0)] HikariPool - After cleanup pool stats HikariPool-0 (total=10, inUse=1, avail=9, waiting=0)
29/10/2014 14:03:46 - DEBUG[] StatementExecutor$$anon$1 - SQL execution completed

  [SQL Execution]
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   INSERT INTO `MasterFlow`.`classifier_results` ( `review_id`, `text`, `category_id`, `aspect_id`, `not_aspect`, `certainty_aspect`, `sentiment`, `sentiment_grade`, `stars`) VALUES ( ...can't show this....);
   ... (total: 100 times); (15466 ms)

  [Stack Trace]

When I run it on the server that hosts the MySQL database it runs fast. What can I do to make it run faster from a remote computer?

by user1120007 at October 29, 2014 10:33 PM

Using Ansible Clouldformation module to provision a CoreOS cluster

I am new to Ansible and I am not sure what my hosts file should look like if I want to provision the cluster from my local machine. My YAML file is as follows:

- hosts: coreos
  tasks:
    - name: Automation CoreOS Cluster
      action: cloudformation >
        stack_name="automation_ansible_coreos_cluster" state=present
        region=us-east-1 disable_rollback=true
      args:
        template_parameters:
          InstanceType: m1.small
          ClusterSize: 3
          DiscoveryURL: '<val>'
          KeyPair: Automation
        tags:
          Stack: ansible-cloudformation-coreos

Any advice will be appreciated.

by smc1111 at October 29, 2014 10:26 PM

Too many TIME_WAIT connections, getting "Cannot assign requested address"

I have a small web application which opens a TCP socket connection, issues a command, reads the response and then closes the connection for every request to a particular REST endpoint.

I've started load testing the endpoint using Apache JMeter and am noticing that after running for some time, I start seeing errors like "Cannot assign requested address", the code opening this connection is:

def lookup(word: String): Option[String] = {
  try {
    val socket = new Socket(InetAddress.getByName("localhost"), 2222)
    val out = new PrintStream(socket.getOutputStream)
    val reader = new BufferedReader(new InputStreamReader(socket.getInputStream, "utf8"))
    out.println("lookup " + word)

    var curr = reader.readLine()
    var response = ""
    while (!curr.contains("SUCC") && !curr.contains("FAIL")) {
      response += curr + "\n"
      curr = reader.readLine()
    }
    curr match {
      case code if code.contains(SUCCESS_CODE) => Some(response)
      case _ => None
    }
  } catch {
    case e: Exception => println("Got an exception " + e.getMessage); None
  }
}
When I run netstat I also see a lot of the following TIME_WAIT connection statuses, which suggests to me that I am running out of ports in the ephemeral range.

tcp6       0      0 localhost:54646         localhost:2222          TIME_WAIT  
tcp6       0      0 localhost:54638         localhost:2222          TIME_WAIT  
tcp6       0      0 localhost:54790         localhost:2222          TIME_WAIT  
tcp6       0      0 localhost:54882         localhost:2222          TIME_WAIT 

I am wondering what the best solution for this issue is. My current thought is to create a connection pool, so that connections to the service running on port 2222 can be reused across HTTP requests rather than created anew each time. Is this a sensible way of fixing the issue and making the application scale better? It seems like a lot of overhead to introduce, and it definitely makes my application more complicated.

Are there any other solutions for helping this application scale and overcome this port issue that I'm not seeing? My web application is running in an Ubuntu linux VM.
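
A connection pool is indeed the standard fix here; it also avoids paying connection setup per request. Below is a minimal blocking-pool sketch of my own (generic, so it needs no live server; in the app the factory would be `() => new Socket("localhost", 2222)`, and a production pool would also need liveness checks and idle timeouts):

```scala
import java.util.concurrent.ArrayBlockingQueue

// Tiny fixed-size resource pool: borrow with take(), always return in finally.
class SimplePool[A](size: Int, factory: () => A) {
  private val idle = new ArrayBlockingQueue[A](size)
  (1 to size).foreach(_ => idle.put(factory()))

  def withResource[B](f: A => B): B = {
    val res = idle.take() // blocks when all `size` resources are borrowed
    try f(res) finally idle.put(res)
  }
}
```

Each HTTP request would then call `pool.withResource { socket => ... }` instead of `new Socket(...)`, so at most `size` local ports are ever in use and the TIME_WAIT buildup disappears.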

by jcm at October 29, 2014 10:20 PM

How do you shutdown a ZMQ QueueDevice from a worker thread in c#

This is my first time using ZMQ, and I'm trying to set up a process to handle many getimage requests. While debugging I hit several exceptions that I'm trying to fix, and I also want to implement a way to stop the QueueDevice, terminate all the threads, and exit gracefully.

  1. receiver.Connect(BackendBindAddress); throws "An unhandled exception of type 'NetMQ.InvalidException' occurred in NetMQ.dll" with the error code NetMQ.zmq.ErrorCode.EINVAL. Why doesn't this exception stop further execution of the thread?
  2. I've tried making QueueDevice a static field and calling queueDevice.stop() in the shutdown message function, but then the threads start throwing TerminatingExceptions and never exit. So how can I shut down all the threads and the main thread?

Test driving the code

    public void ProgramStartupShutdownTest()
    {
        var mockClientWrapper = new Mock<IClient>(MockBehavior.Strict);

        var target = new SocketListener(2, mockClientWrapper.Object);

        var task = Task.Factory.StartNew(() => target.StartListening("tcp://localhost:81"));
        using (var client = NetMQContext.Create())
        {
            var socket = client.CreateRequestSocket();
            var m = new NetMQMessage(new ShutdownMessage().CreateMessageFrames());
        }

        var timedout = task.Wait(200);
    }
Code I'm working with

private const string BackendBindAddress = "inproc://workers";

public SocketListener(int numberOfWorkers, IClient client)
{
    numberOfThreads = numberOfWorkers;
    _client = client;
}

public void StartListening(string address)
{
    StartZeroMQ(address, context =>
    {
        for (var i = 0; i <= numberOfThreads; i++)
        {
            var t = new Thread(WorkerRoutine);
            t.Start(new WorkerParamters
            {
                Context = context,
                Client = _client
            });
        }
    });
}

private void StartZeroMQ(string address, Action<NetMQContext> setupWorkers)
{
    using (var context = NetMQContext.Create())
    {
        var queueDevice = new QueueDevice(context, address, BackendBindAddress, DeviceMode.Blocking);
        setupWorkers(context);
        queueDevice.Start();
    }
}

struct WorkerParamters
{
    public NetMQContext Context;
    public IClient Client;
}

private static void WorkerRoutine(object startparameter)
{
    var wp = (WorkerParamters)startparameter;
    var client = wp.Client;
    using (var receiver = wp.Context.CreateResponseSocket())
    {
        receiver.Connect(BackendBindAddress);
        var running = true;
        while (running)
        {
            var message = receiver.ReceiveMessage();
            var letter = Message.ParseMessageFrame(message,
                                                   imageMessage => GetImage(imageMessage, client),
                                                   videoMessage => GetVideo(videoMessage, client),
                                                   shutdownMessage =>
                                                   {
                                                       running = false;
                                                       return true;
                                                   });

            receiver.Send(letter.ToJson(), Encoding.Unicode);
        }
    }
}
by TealFawn at October 29, 2014 10:12 PM





compressing a set of binary strings with fixed length

I'm looking for a data structure / algorithm to store an unordered set S of binary strings of a fixed length n (i.e. all matching the following regular expression: [01]{n}).

Insertion and lookup ("Is element x in S?") should take at most time polynomial in |S|.

The space complexity should be logarithmic in |S| for large sets. In other words, the space should not be exponential in n if, for example, 2^n / 2 random unique strings are inserted, but polynomial in n.

Is such a thing known to exist?

by Dobi at October 29, 2014 09:54 PM


Brilliant post by esr on dealing with aging hackers


I'm not as old as esr (I'm 43) but I run into this all the time. And it is so frustrating. Mostly it is frustrating because it's a complete communication failure, and the person you're trying to communicate with is often so full of arrogance.

There was a reddit thread here a while ago about my GNU doc thing (a GNU doc viewer in JS) in which some idiot commented on how "hipster" the technologies were. It's a sort of badge of pride among some people to be ignorant of modern tech stacks. As if the world could not possibly have gotten better after Pascal or something.

It's ridiculous and annoying and it makes me sick.

submitted by nicferrier
[link] [24 comments]

October 29, 2014 09:37 PM



found : spray.routing.Directive0 (which expands to) spray.routing.Directive[shapeless.HNil] required: spray.routing.Directive[shapeless.HList]

I need help. I am trying to use curl to do an HTTP POST and use Spray routing along with parameters.

curl -v -o UAT.json "http://*****/pvdtest1/14-JUL-2014?enriched=true" -H  "Content-Type: application/json" -d '{ "text": "Test", "username": "User" }'

My JSON POST body is optional, meaning I can also get a request like

curl -v -o UAT.json "http://*****/pvdtest1/14-JUL-2014?enriched=true"

In the routing if I use

 path("pvdtest1" / Segment) { (cobDate) =>
          (parameters('enriched.as[Boolean] ? false) & post) {
            (enriched) => {
              println(" inside post")
              entity(as[Message]) { message =>
                println(" inside post 1")
                logger.debug("User '{}' has posted '{}'", message.username, message.text)

The above code works fine,

but if I try to make the POST body optional, it does not work:

 path("pvdtest1" / Segment) { (cobDate) =>
          (parameters('enriched.as[Boolean] ? false) | post) {
            (enriched) => {
              println(" inside post")
              entity(as[Message]) { message =>
                println(" inside post 1")
                logger.debug("User '{}' has posted '{}'", message.username, message.text)

Error:(166, 56) type mismatch;
 found   : spray.routing.Directive0
    (which expands to)  spray.routing.Directive[shapeless.HNil]
 required: spray.routing.Directive[shapeless.HList]
Note: shapeless.HNil <: shapeless.HList, but class Directive is invariant in type L.
You may wish to define L as +L instead. (SLS 4.5)
          (parameters('enriched.as[Boolean] ? false) | post) {

Can someone please help in resolving the issue?

by Kunal at October 29, 2014 09:33 PM


Comparing coefficients of boolean functions

Let a real polynomial representing a boolean function be $P(x_1,\dots,x_n) = \sum_{a\in\{0,1\}^n}c_ax^a = \sum_{a\in\{0,1\}^n}p(a)\prod_{i\in 1_a}x_i\prod_{j\in \bar{1}_a}(1-x_j)$ where $1_a$ is the set of $i$ such that $a_i=1$ and $\bar{1}_a$ is the complement of $1_a$. How do you represent $c_a$ in terms of $p(a)$?

Is it true that for all $b\in\{0,1\}^n$, $c_{b} = \sum_{a\in\{0,1\}^n}(-1)^{<b,a>}p(a)$?

by Turbo at October 29, 2014 09:10 PM


Analysis of Unbalanced Covered Calls

Hello, I am doing an analysis of covered calls with an extra amount of naked calls. Ignore the symbol and current macroeconomic events.

I couldn't find any reference to this strategy ("unbalanced" is an adjective I chose, referring to the non-equivalent legs); it looks favorable because of the amount of premium that can be collected.

So the covered call -10 ATM calls +1000 shares

and the additional naked calls

-10 ATM calls

This is considered because the margin requirement for the naked calls isn't as much as completely covering them.

For example purposes, 65 strike is used and assume symbol shares were purchased at or near 65

The risk profile of a normal covered call looks like this: The green line represents profit/loss at expiration, the "hockey stick" risk profile you may be familiar with.

Do note: the premium collected here represents a maximum 7% gain on the entire position, which is mainly the shares.

Also note: the normal covered call becomes loss making at underlying price $60.55 (by expiration), this represents at 6.8% decline in the underlying asset. This represents 6.8% of downside protection.

The risk profile of an unbalanced covered call looks like this:

Do note: the premium collected here represents a maximum 10% gain on the entire position, and the symbol would have to decrease or increase by 2 strikes on expiration (63/65 strike) or increase by for you to get the same 7% gain that the normal covered call would have provided in its best scenario.

Also note: the unbalanced covered call becomes loss making at underlying price $58.33 (by expiration), this represents a 10% decline in the underlying asset and 10% of downside protection. This is sort of favorable because all of the time premium will still be collected and more calls can be written for the next option series after expiration, so total account equity will still grow despite losses in the underlying.

Again, the green line represents profit/loss at expiration day.

Given the extra downside protection, and potential need for a stop order if the asset price rises too high, is the added risk of the naked leg justified? Mainly, what other variables should be considered in this analysis, especially related to the theoretically unlimited loss on asset price rise and how fast the delta will increase on the naked leg.

This is similar to the risk profile of a diagonal, except that the underlying is still stock so after expiration, one could write new options at any strike price without changing margin considerations. (in a diagonal, the long leg's strike has to be subtracted from the short legs strike, resulting in potentially massive margin implications)

Thanks for any insight

by CQM at October 29, 2014 09:09 PM


Ansible: can't access dictionary value - got error: 'dict object' has no attribute

- hosts: test
  tasks:
    - name: print phone details
      debug: msg="user {{ item.key }} is {{ }} ({{ item.value.telephone }})"
      with_dict: users
  vars:
    users:
      alice:
        name: "Alice"
        telephone: 123

When I run this playbook, I am getting this error:

One or more undefined variables: 'dict object' has no attribute 'name' 

This one actually works just fine:

debug: msg="user {{ item.key }} is {{ item.value }}"

What am I missing?

by user1692261 at October 29, 2014 08:58 PM


Is it important to equalize the minimum price fluctuation in pairs trading?

For example, suppose we were trading a strategy which buys one Brent contract and sells one Gasoil contract. The minimum price fluctuation for a Brent contract is \$10, and the minimum price fluctuation for Gasoil is \$25, although the contract price for each is on the same order of magnitude. I have heard that the spread should be priced to take this discrepancy into consideration, but I'm not sure I see how or why.

by c12345 at October 29, 2014 08:51 PM



Can't login to .htpasswd protected directory created with ansible

I am trying to use Ansible to create .htaccess and .htpasswd files

htpasswd: path=/mypath/.htpasswd name=test password=test owner=root group=root mode=0640

But I can't login with the test:test credentials

The value inside the file does seem valid and has correctly used the APR MD5 algorithm


I'm using ansible 1.7.2, the host machine is Centos 6.5

by Jenkz at October 29, 2014 08:35 PM




Market Timing Performance for a single stock

It seems there are models that study the market timing ability of funds. Models such as the Treynor-Mazuy and Merton-Henriksson. One can also study the bull beta and compare it to a bear beta.

My problem here is to analyze a series of trades I have made on a stock and be able to say whether or not I had market timing ability.

The reason, I think, I can't use the models mentioned is because they don't seem to be "trading-oriented". In my opinion, the act of buying and selling can only bias a comparison between the returns of my position and those of the underlying stock. Nor do I think that looking only at the bottom line is a good indication.

by user1627466 at October 29, 2014 08:27 PM


How to have separate long lived pool of actors for business layer in Spray application?

I'm creating a RESTful API with Spray 1.2.0 and Akka 2.2.3. Scala version is 2.10 (all of these versions can be changed if necessary). In my business layer I have to communicate over SSH connections to a legacy system. These connections take a long time (1 - 3 minutes) to set up and will expire if they are idle for too long. I worked on the SSH code separately on a test app and now I've wrapped it in an actor.

How do I designate a pool of the SSH actors that is separate from the actors Spray uses to handle HTTP requests? I need these actors to be created at startup as opposed to when a request comes in, otherwise the request times out while the connection is being established. Also, how do I control the size of that pool of actors independently of Spray's actors?

by Gangstead at October 29, 2014 08:26 PM




Good news! The FBI has slipped a 15-year-old student ...

Good news! The FBI slipped a trojan onto a 15-year-old student's machine by posing to him as the Seattle Times website. Why is that good news? Come on! Would you have thought you could catch a 15-year-old student on a newspaper website, of all places? Someone that age still reads the news! Maybe not everything is lost after all.

October 29, 2014 08:01 PM



Where is the proof of universality of Rule110 in Stephen Wolfram's book?

I have Stephen Wolfram's book A New Kind of Science, and I want to find the proof of the universality of Rule 110. I couldn't find a clue in the table of contents, since it only shows 12 chapters and no details.

Could someone help me, please? Does he show the proof in his book at all? On what page does he show it?

I'm reading the paper by Matthew Cook, for sure; it's just that more pictures help me understand things better. I now know pretty well what the tag system and the cyclic tag system are, but I'm still trying to find a way to visualize the equivalence between the tag system and the Turing machine. Other resources about this are welcome, too!

Many thanks!

by user3572889 at October 29, 2014 07:48 PM



Parametricity and projective eliminations for dependent records

It's well-known that in System F, you can encode binary products with the type $$ A \times B \triangleq \forall\alpha.\; (A \to B \to \alpha) \to \alpha $$ You can then define projection functions $\pi_1 : A \times B \to A$ and $\pi_2 : A \times B \to B$.

This isn't so surprising, even though the natural reading of the F type is of a pair with a let-style elimination $\mathsf{let}\;(x,y) = p \;\mathsf{in}\; e$, because the two kinds of pair are interderivable in intuitionistic logic.

Now, in a dependent type theory with impredicative quantification, you can follow the same pattern to encode a dependent record type $\Sigma x:A.\; B[x]$ as $$ \Sigma x:A.\;B[x] \triangleq \forall\alpha.\; (\Pi x:A.\; B[x] \to \alpha) \to \alpha $$ But in this case, there isn't a simple way of defining the projective eliminators $\pi_1 : \Sigma x:A.\;B[x] \to A$ and $\pi_2 : \Pi p:(\Sigma x:A.\;B[x]).\; B[\pi_1\,p]$.

However, if the type theory is parametric, you can use parametricity to show that $\pi_2$ is definable. This appears to be known --- see, for example, this Agda development by Dan Doel in which he derives it without comment --- but I can't seem to find a reference for this fact.

Does anyone know a reference for the fact that parametricity allows defining projective eliminations for dependent types?

EDIT: The closest thing I've found so far is this 2001 paper by Herman Geuvers, Induction is not derivable in second order dependent type theory, in which he proves that you can't do it without parametricity.

by Neel Krishnaswami at October 29, 2014 07:13 PM


Best way to store hourly/daily options data for research purposes

There are quite a few discussions here about storage, but I can't find quite what I'm looking for.

I need to design a database to store (mostly) option data (strikes, premiums, bid/ask, etc.). The problem I see with an RDBMS is that, given the big number of strikes, tables will be enormously long and hence slow to process. While I'm reluctant to use MongoDB or a similar NoSQL solution, for now it seems a very good alternative (quick, flexible, scalable).

  1. There is no need for tick data; it will be hourly and daily closing prices plus whatever other parameters I'd want to add. So it does not need to be updated frequently, and writing speed is not that important.

  2. The main performance requirement is in using it for data mining, stats and research, so it should be as quick as possible (and preferably easy) to pull and aggregate data from it. I.e., think of a 10-year backtest which performs ~100 transactions weekly over various types of options, or calculating a volatility swap over some extended period of time. So the quicker the better.

  3. There is lots of existing historical data which will be transferred into the database, and it will be updated on a daily basis. I'm not sure how much space exactly it will take, but AFAIK storage should not be a constraint at all.

  4. Support by popular programming languages & packages (C++, Java, Python, R) is very preferable, but would not be a deal breaker.

Any suggestions?

by sashkello at October 29, 2014 07:12 PM


Different results for raw MD5 base64 encoded string between PHP and Clojure (Java) code for some characters

I have a server that creates a hash using the following code:

base64_encode(md5("some value", true))

What I have to do is to produce the same hash value in Clojure (using Java interop). What I did is to create the following Clojure functions:

(defn md5-raw [s]
  (let [algorithm (java.security.MessageDigest/getInstance "MD5")
        size (* 2 (.getDigestLength algorithm))]
    (.digest algorithm (.getBytes s))))

(defn bytes-to-base64-string [bytes]
  (String. (b64/encode bytes) "UTF-8"))

Then I use that code that way:

(bytes-to-base64-string (md5-raw "some value"))

Now, everything works fine with normal strings. However, after processing multiple different examples, I found that the following character is causing issues:

This is Unicode character #8217 (U+2019, the right single quotation mark).

If I run the following PHP code:

base64_encode(md5("’", true))

What is returned is:


If I run the following Clojure code:

(bytes-to-base64-string (md5-raw "’"))

I get the following value:


Why is that? I am suspecting a character encoding issue, but everything appears to be handled as UTF-8 as far as I can see.
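A likely culprit (an assumption on my part, not confirmed from the post): a no-argument `.getBytes` uses the JVM's platform default charset, while PHP hashes the raw UTF-8 bytes of the source file. Forcing UTF-8 on the Java side makes the digests comparable; in the Clojure version the equivalent fix is `(.getBytes s "UTF-8")`. A minimal Scala/Java-interop sketch:

```scala
import java.nio.charset.StandardCharsets
import java.security.MessageDigest
import java.util.Base64

// MD5 + Base64 with an explicit charset, so the bytes being hashed are the
// same regardless of the JVM's default encoding.
def md5Base64(s: String): String = {
  val digest = MessageDigest.getInstance("MD5")
    .digest(s.getBytes(StandardCharsets.UTF_8)) // explicit UTF-8, not the platform default
  Base64.getEncoder.encodeToString(digest)
}
```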

by Neoasimov at October 29, 2014 07:07 PM

How can I create an empty java.util.UUID object?

I don't know why I can't find the answer to this, but I need to pass a blank UUID object to one of my functions to represent a lack of UUID. What is the UUID-analogous form of

val x = ""

, which would be an empty string. I'm essentially trying to get an empty UUID. I tried


but received an error, as you need a valid UUID string.

EDIT: I am implementing this in Scala.
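For reference, a minimal sketch: `java.util.UUID` has no built-in "empty" constant, but the all-zero "nil" UUID can be constructed directly from its two 64-bit halves. (Wrapping the value in `Option[UUID]` is arguably the more idiomatic Scala way to model "no UUID".)

```scala
import java.util.UUID

// The all-zero "nil" UUID, built from its most/least significant 64 bits.
val emptyUuid: UUID = new UUID(0L, 0L)

// Alternatively, model absence explicitly instead of using a sentinel value.
val maybeUuid: Option[UUID] = None
```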

by jeffrey at October 29, 2014 07:05 PM



Terminating a scala program?

I have used try/catch as part of my MapReduce code. I am reducing my values based on COUNT in the code below. How do I terminate the job using the code below?

class RepReducer extends Reducer[NullWritable, Text, Text, IntWritable] {
  override def reduce(key: NullWritable, values: Iterable[Text], context: Reducer[NullWritable, Text, Text, IntWritable]#Context): Unit = {
    val count = values.toList.length
    if (count == 0) {
      try {
        context.write(new Text("Number of tables with less than 40% coverage"), new IntWritable(count))
      } catch {
        case e: Exception =>
          Console.err.println(" ")
      }
    } else {
      System.out.println("terminate job") // here I want to terminate if count is not equal to 0
    }
  }
}

by sri at October 29, 2014 06:53 PM


What is the difference between Oracle and SQL*Plus? [on hold]

I installed Oracle 10g.

There is also SQL*Plus in it. To start the software, we have to start with SQL*Plus.

What is the difference between them? Are they the same, or something else?

by MA Ali at October 29, 2014 06:51 PM


Is there a way to create a Clustered Index in Slick?

I'm using Slick to create an application which stores a bunch of records about songs in an Hsqldb database.

Currently my tables are defined as:

abstract class DBEnum extends Enumeration {

  def enum2StringMapper(enum: Enumeration) = MappedJdbcType.base[enum.Value, String](
    b => b.toString,
    i => enum.withName(i))
}

class Artist(tag: Tag) extends Table[(Int, String)](tag, "ARTIST") {

  def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME", O.NotNull)

  def nameIndex = index("NAME_IDX", name, unique = true)

  def * = (id, name)
}

class Song(tag: Tag) extends Table[(Int, String, Int)](tag, "SONG") {

  def id = column[Int]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME", O.NotNull)
  def artistId = column[Int]("ARTIST_ID")

  def artistFk = foreignKey("ARTIST_FK", artistId, TableQuery[Artist])(_.id)

  def idNameIndex = index("ID_NAME_IDX", (id, name), unique = true)

  def * = (id, name, artistId)
}

object BroadcastType extends DBEnum {

  implicit val BroadcastTypeMapper = enum2StringMapper(BroadcastType)

  type BroadcastType = Value
  val PLAYED = Value("Played")
  val NOW = Value("Now")
  val NEXT = Value("Next")
}

class Broadcast(tag: Tag) extends Table[(Int, Timestamp, BroadcastType.BroadcastType)](tag, "BROADCAST") {

  def songId = column[Int]("SONG_ID")
  def dateTime = column[Timestamp]("DATE_TIME")
  def broadcastType = column[BroadcastType.BroadcastType]("BROADCAST_TYPE")

  def pk = primaryKey("BROADCAST_PK", (songId, dateTime))

  def songFk = foreignKey("SONG_FK", songId, TableQuery[Song])(_.id)

  def * = (songId, dateTime, broadcastType)
}

I'm still just setting things up so not sure if it's correct but hopefully you get the idea.

Now what I want to do is keep my composite primary key on the Broadcast table but I want to create a clustered index on the timestamp. Most of my queries on that table will be filtered by ranges on the timestamp. Rows will be inserted with an increasing timestamp so there is minimal shuffling of records to maintain the physical order.

Is there any abstraction to create a clustered index in Slick? So far it seems like I'm going to have to fall back to using plain SQL.

by Steiny at October 29, 2014 06:47 PM


CS Test question, I don't know why the answer is as given.

Question: Suppose an operating system allocates time slices in 10 millisecond units and the time required for a context switch is negligible. How many processes can obtain a time slice in one second if half of them use only half of their slice?


I don't understand where this answer is coming from. Surely if half of them use 10 milliseconds, that's 750 milliseconds (75 × 10); then the other half use half of the 10 milliseconds, that's 375 milliseconds (75 × 5)... so in total the time used is 1125 milliseconds, which is greater than the 1 second allowed.

What am I not getting?
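One consistent reading of the arithmetic (my assumption about what the intended answer computes) treats the number of processes $n$ as the unknown instead of fixing it at 150: $$\frac{n}{2}\cdot 10\,\mathrm{ms} + \frac{n}{2}\cdot 5\,\mathrm{ms} \le 1000\,\mathrm{ms} \;\Longrightarrow\; n \le \frac{1000}{7.5} \approx 133,$$ so roughly 133 processes can each obtain a slice within the second.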

submitted by Al__Gorithm
[link] [10 comments]

October 29, 2014 06:43 PM

How hard would it be to build an AI opponent for this ANTI-TRON light cycles game idea?

This is a light cycles game idea where crashing is impossible and each player tries to close off regions via trails to get credit for those regions (like Qix and Go).

A region's edges may come from the trails of multiple players. The player who gets credit for the region is the one who closed it off.

Players can move through trails and regions without issue. Getting trapped is impossible. However, once a region is allocated to a player, this allocation is fixed for the duration of the game.

What strategy would you use to play this game?

How hard would it be to build an AI opponent?

P.S. Additional details:

  • A region on the border of the grid without trails surrounding its entire perimeter is not considered closed off yet.

  • In case of ties, a neutral region is created that is not awarded to any player.

submitted by amichail
[link] [7 comments]

October 29, 2014 06:30 PM


How to use activator/sbt with Java 8?

I installed activator using the minimal version, hence it should have auto-downloaded the newest possible scala/sbt versions.

When launching a given app (I tried "realtime-search"), I get an error which Google tells me is related to Java 8 incompatibility ("error while loading CharSequence, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/...CharSequence.class' is broken").

Most pages about the subject tell you to downgrade the JVM to 1.7, but Activator includes a number of java8-generators, which tells me that java 8 support should be possible?!

Could please somebody tell me

1) how to use sbt/activator with an up-to-date JVM?

2) how it can be that Scala/sbt is by default incompatible with Java 8, even though it has been out for months and there was lots of time before release to prepare for it?

submitted by ib84
[link] [9 comments]

October 29, 2014 06:27 PM


What's an effective offsite backup strategy for a ZFS mirrored pool?

I use a ZFS pool consisting of two mirrored disks. To get offsite backups going I've purchased two more disks.

My initial plan was to create the offsite backup by attaching a third disk to the mirror, waiting for ZFS to resilver, then detach the drive and carry it offsite. This works well enough, but I've been surprised that it appears to perform a full resilver every time a disk is attached (I'd read, and possibly misunderstood, that each attach would result in an incremental or delta resilver). This results in backups taking longer than is acceptable.

My requirements are to have an offsite copy of my zpool and all its snapshots that I can rotate daily. This means the resilvering needs to take at most 24 hours--currently it's close to that, but our plans of growing our pool will push it beyond that timeframe.

How can I keep offsite backups that don't require a full resilvering operation? Should I be using a different filesystem on my backup drives (e.g. exporting an image rather than having them be part of the ZFS pool)? Should I have the backups in a separate pool and send new snapshots to it as they are created?

by STW at October 29, 2014 06:22 PM


Why does the CPU profile in VisualVM show the process spending all its time in a promise deref when using Clojure's core.async to read a Kafka stream

I am running a Clojure app reading from a Kafka stream. I am using the shovel GitHub project to read from the stream. When I profiled my application using VisualVM looking for hotspots, I noticed that most of the CPU time, about 70%, is being spent in the function clojure.core$promise$reify__6310.deref.

The shovel API consumer is a thin wrapper on the Kafka ConsumerGroup API. It reads from a Kafka topic and publishes out to a core.async channel. Should I be concerned that my application latencies would be affected if I continued using this API? Is there any explanation why the reify on the promise is taking this much CPU time?

by Rohit at October 29, 2014 06:18 PM

Scala external DSL parse line return

I am writing a set of DSLs using StandardTokenParsers in Scala.

I would like to know if there is a trick to read multi-line input such as:

Rule 1 {        //RULE BLOCK
  Add 500       //Multiple statement, needs a line break to separate when parsing
  Multiple 2    //Multiple statement
}               //Empty space could be break in higher-level in file parsing
Rule 2 {
  Subtract 100
  Divide 10
}

where statements are (Action ~ numericLit)

In this case, statements within { } are repeatable as "Statement"; however, they need to be separated by [return]. Is there a regex parser I can mix in to help?
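A sketch of the newline-as-separator idea, hand-rolled rather than using StandardTokenParsers (whose default lexer discards newlines as whitespace; with the combinator library the analogous move is overriding the whitespace/lexer so `\n` survives and statements become `repsep(statement, eol)`). All names here are illustrative:

```scala
// One statement per physical line; a line break is the separator.
case class Stmt(action: String, n: Int)
case class RuleBlock(id: Int, stmts: List[Stmt])

object RuleParser {
  private val RuleHeader = """Rule\s+(\d+)\s*\{""".r
  private val StmtLine   = """(Add|Multiple|Subtract|Divide)\s+(\d+)""".r

  def parse(input: String): List[RuleBlock] = {
    val lines = input
      .split("\r?\n").toList
      .map(_.split("//", 2)(0).trim) // drop trailing comments
      .filter(_.nonEmpty)            // skip blank lines
    lines.foldLeft(List.empty[RuleBlock]) { (acc, line) =>
      line match {
        case RuleHeader(id) => RuleBlock(id.toInt, Nil) :: acc
        case "}"            => acc
        case StmtLine(a, n) => acc match {
          case cur :: rest => cur.copy(stmts = cur.stmts :+ Stmt(a, n.toInt)) :: rest
          case Nil         => sys.error(s"statement outside a rule block: $line")
        }
        case other => sys.error(s"cannot parse line: $other")
      }
    }.reverse
  }
}
```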

by Joyfulvillage at October 29, 2014 06:11 PM


Do turing machines assume something infinite at some point?

In a previous question, What exactly is an algorithm?, I asked whether having an "algorithm" that returns the value of a function based on an array of precomputed values was an algorithm.

One of the answers that caught my attention was this one:

The factorial example gets into a different model of computation, called non-uniform computation. A Turing Machine is an example of a uniform model of computation: It has a single, finite description, and works for inputs of arbitrarily large size. In other words, there exists a TM that solves the problem for all input sizes.

Now, we could instead consider computation as follows: For each input size, there exists a TM (or some other computational device) that solves the problem. This is a very different question. Notice that a single TM cannot store the factorial of every single integer, since the TM has a finite description. However, we can make a TM (or a program in C) that stores the factorials of all numbers below 1000. Then, we can make a program that stores the factorials of all numbers between 1000 and 10000. And so on.

Doesn't every TM actually assume some way to deal with infinity? I mean, even a TM with a finite description that computes the factorial of any number N through the algorithm

 int fact(int n) {
   int r = 1;
   for (int i = 2; i <= n; i++)
     r = r * i;
   return r;
 }

contains the assumption that a TM has the "hardware" to compare numbers of arbitrary size through the "<=" comparator, and also ADDers to increment i up to an arbitrary number, and, moreover, the capability of representing numbers of arbitrary size.

Am I missing something? Why is the approach that I presented in my other question less feasible with respect to infinity than this one?

by Devian Dover at October 29, 2014 05:59 PM



Recreating the behavior of Haskell's `replicateM` in Scala

I'm trying to learn how to write monadic code in Scala, but I miss Haskell's ability to constrain types to belong to typeclasses when declaring the type of a function.

For example, I'm trying to write something like replicateM from Control.Monad in Scala. Without caring about type annotations, this would be something like:

def replicateM(n: Int)(x: M[A]): M[List[A]] = n match {
  case 0 => x.map(_ => List())
  case _ => for {
    head <- x
    tail <- replicateM(n - 1)(x)
  } yield head :: tail
}

(I see that this might not be the most efficient implementation; it's just a simple way to write it).

Where I stumble is: how do I properly annotate the types here? What type is M? How do I restrict M only to types that have flatMap defined on them? It feels like I could do this with traits but I'm not sure how.
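A sketch of the standard answer (a hand-rolled typeclass here; Scalaz and Cats ship the real thing under the same name): make the constraint an implicit parameter over a higher-kinded type `M[_]`, playing the role of Haskell's `(Monad m)` context.

```scala
// Minimal Monad typeclass: pure + flatMap, with map derived from them.
trait Monad[M[_]] {
  def pure[A](a: A): M[A]
  def flatMap[A, B](m: M[A])(f: A => M[B]): M[B]
  def map[A, B](m: M[A])(f: A => B): M[B] = flatMap(m)(a => pure(f(a)))
}

object Monad {
  implicit val listMonad: Monad[List] = new Monad[List] {
    def pure[A](a: A): List[A] = List(a)
    def flatMap[A, B](m: List[A])(f: A => List[B]): List[B] = m.flatMap(f)
  }
  implicit val optionMonad: Monad[Option] = new Monad[Option] {
    def pure[A](a: A): Option[A] = Some(a)
    def flatMap[A, B](m: Option[A])(f: A => Option[B]): Option[B] = m.flatMap(f)
  }
}

// The implicit evidence restricts M to types with a Monad instance in scope.
def replicateM[M[_], A](n: Int)(x: M[A])(implicit M: Monad[M]): M[List[A]] =
  if (n <= 0) M.pure(List.empty[A])
  else M.flatMap(x)(head => M.map(replicateM(n - 1)(x))(tail => head :: tail))
```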

by Rafael S. Calsaverini at October 29, 2014 05:53 PM

Accessing Compojure query string

I'm trying to pull a value out of the URL query string. I can return what I believe is a map; however, when I use the code below, it doesn't process it as expected. Can anyone advise how I access specific values in the returned query-string data structure?


(defroutes my-routes
  (GET "/" [] (layout (home-view)))
  (GET "/remservice*" {params :query-params} (str (:parameter params))))

by Dale at October 29, 2014 05:53 PM

Scala 2.11 reflection and annotations (Java) with parameters

I have a simple class-level annotation written in Java:

public @interface Collection {
    String name();
}

used like:

@Collection(name = "mytable")
case class Foo(...)

I need to introspect classes in Scala 2.11 to obtain the value of the name parameter. How can I get this info? I'm up to here:

val sym = currentMirror.classSymbol(Class.forName(fullName))
val anno = sym.annotations.head
val annoType = anno.tree.tpe  // I can get the annotation's type this way
println(anno.tree.children.tail)  // prints List(name = "mytable")

I'm close! I can see my name parameter and its value, but this doesn't seem to be accessible like a Map or anything friendly. How can I get the value of my annotation's parameter?
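A hedged alternative (assuming you control the annotation and can give it runtime retention): with `@Retention(RetentionPolicy.RUNTIME)` on `Collection`, plain `java.lang.reflect` avoids tree-walking entirely, as in `Class.forName(fullName).getAnnotation(classOf[Collection]).name()`. Note the default retention (CLASS) is visible to scala-reflect but not to `getAnnotation`. Since a Java annotation can't be declared inside a Scala snippet, the runnable sketch below demonstrates the mechanism with `java.lang.Deprecated`, a standard RUNTIME-retained annotation:

```scala
// Using a standard RUNTIME-retained Java annotation to illustrate the
// java.lang.reflect route; @Retention(RUNTIME) on your own annotation
// makes getAnnotation(...) work the same way for it.
@Deprecated class LegacyThing

object AnnoDemo {
  // True iff `anno` is present and runtime-visible on class `c`.
  def runtimeAnnotationPresent(c: Class[_], anno: Class[_ <: java.lang.annotation.Annotation]): Boolean =
    c.getAnnotation(anno) != null
}
```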

by Greg at October 29, 2014 05:42 PM



For computer engineering student how useful are certifications? [on hold]

I hope this is the right exchange community. If it's not feel free to delete it.

Right now I'm doing HP ATA connected devices:

Before taking it I asked a certain person from my university who just graduated and was pretty smart about HP certifications. While he did say that he believed they were too narrow and specific he said that they could be useful if I happened to be using said equipment and that he heard HP was popular here in Dubai. He also said that our university degree gives a lot of theoretical knowledge but not nearly enough hands on practice which HP certifications provide.

This video also suggests that HP certifications help fresh graduates be more useful in their first job.

I was planning to finish connected devices and do 3 more: networking, servers and storage, and cloud.

Recently I've been doing research and found out that certifications are generally looked down upon, and even considered useless, for someone who is studying Computer Engineering/Science rather than IT. Given that I'm studying computer engineering (a junior now), are these certifications going to help me when looking for an internship/job, or would my time be better spent working on personal projects and building up a GitHub profile? I want to get a useful internship despite my low GPA (2.76), so I realized I'd better develop skills outside of my coursework and build up a portfolio of projects.

Also I have a friend who is studying management information systems who is also taking the HP courses. Could they be useful for him?

by Hauzron at October 29, 2014 05:30 PM


When is a binary relation confluent? [on hold]

My computer science task states that a binary relation R is confluent if confluence formula
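The formula image is lost here; the standard statement of confluence it presumably shows (my reconstruction) is: $$\forall x\,\forall y_1\,\forall y_2\;\big((x \mathrel{R} y_1 \wedge x \mathrel{R} y_2) \rightarrow \exists z\,(y_1 \mathrel{R} z \wedge y_2 \mathrel{R} z)\big)$$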

where R is a subset of ℕ (including 0).
It is my first task using quantifiers and binary relations, so I have some basic quesions concerning the propositional formula above:

a) Is an empty set R = {} confluent?
b) Are y1 and y2 allowed to be equal? If so, is R = {(0,0)} confluent?
c) Can you provide an example of a binary relation which is not confluent?

Thanks in advance.

by mpk77812 at October 29, 2014 05:22 PM


Can someone show me that conditionals are undeniably necessary in computer science? [duplicate]

I know this question might seem weird, and maybe I'm just overthinking, but this is really troubling me because I've been a computer engineer for some time now and conditionals (if statements, for instance) have always seemed pretty clear and intuitive!

I mean, let's assume that i have a program which starts at

int main(int argc, char** argv) // in C

How the hell am I supposed to know what is inside argv and argc? My old, intuitive answer: if statements. Using if statements I would decide, based on the input, how to execute my program... I never considered any possibility other than having the input provided this way, and I, as a programmer, could only mess with what happens after the call to the program, with the input already defined. Hence I had no doubt that I needed to use conditionals.

I mean, I have ONLY 1 PROGRAM that needs to execute for ANY input and provide the right answers, hence I can't avoid if statements. It all seemed alright, for years now... until I started to think... Could there be other ways?

I mean, what if I had a different version of my program for each possible input? The operating system or the hardware could then call a different version according to the input. Then I thought: this would simply delegate the if statements to either the operating system or the hardware, respectively.

Then I even thought, what if the user had a different computer for each different input? I know this might seem really stupid, but I'm just trying to understand the "law" behind the need for conditionals in our physical world!

Also, in this last example it could be argued that the conditional would simply have to be executed in the user's brain (when he decides which computer to use based on the intended input).

One situation that I still consider strictly necessary pertains to when we already have some data in memory in a computer, and at a given moment (which can't be predicted), I decide to apply some algorithm or filter to the data.

Assume that a user typed a lot of numbers into a text file. Afterwards, for some reason, he decides to calculate the amount of numbers greater than 0. This situation still seems kind of natural to me.

Can someone shed some light on this subject? This is really troubling me :( Maybe I've overthought things too much and now I'm paying the price for it...

Thanks in advance!

by Devian Dover at October 29, 2014 05:13 PM



Send Unauthenticated Request to a Different Action Route

I am implementing a global filter across all requests to identify that the user who is making the request is authenticated, and if not send them to the login screen.

My Global object extends the WithFilters class, which is using the Authentication Filter:

object Global extends WithFilters(AuthenticationFilter()) { }

My Authentication Filter:

class AuthenticationFilter extends Filter {
  def apply(next: (RequestHeader) => Future[Result])(request: RequestHeader): Future[Result] = {
    println("this is the request that will be filtered: " + request)
    if (!authenticated)
      ??? // How do I send the request to the login Action?
    else
      next(request)
  }
}

object AuthenticationFilter {
  def apply(actionNames: String*) = new AuthenticationFilter()
}

My question is how do I go about sending an unauthenticated user to the login Action/route?

by j will at October 29, 2014 05:08 PM

Dave Winer

Broken clipboards

The clipboard in Chrome/Mac is getting worse not better.

Basically there are times when Copy just doesn't work. The way to work around it is to create a new tab, set up the tab so that the text you want to copy is selected, and do it again. It might work.

Repeat until it does work. Sometimes copying stuff to the clipboard, an operation that shouldn't require any conscious effort for an experienced user such as myself, takes minutes.

That's Chrome on the Mac. Now Safari on the iPad, another of my mainstays, can't copy and paste.

This is such a basic important operation for a computer, to be able to copy an idea from one place to insert it into another. This kind of breakage simply is not acceptable. Yet what choice do we have other than to accept it. It may seem like a small annoyance, but it's a very huge change in the way computers work. And not something anyone else is likely to ask about. So I thought I should.

October 29, 2014 05:07 PM


How to evaluate conditional probability on a hyperplane

I'm not sure if the title is exactly correct, but my problem is the following conditional probability $$P(x_1\geq1|x_1+x_2+...+x_n=2, x_i\geq0)$$

In addition, how to evaluate $$P(\forall x_i \geq1|x_1+x_2+...+x_n=2, x_i\geq0)$$

by mathsam at October 29, 2014 05:04 PM



Better way to implement an authentication barrier in a handler?

Context: I am making a JSON API in Clojure, using Compojure + Ring.

I have a few handler functions that require the user to supply their API token before the body of the function gets executed.

Because this same idea is reused in several places (make sure the token is present, get the api user), I wanted to abstract it into its own function. Right now, I'm implementing it as a callback function:

(defn when-authenticated
  "Finds a user with a matching api token and passes it into the success
  function. If the user is not found, returns a 403."
  [token success-fn]
  (if-let [api-user (first (find-by-token {:token token}))]
    (success-fn api-user)
    {:status 403 :body "Unauthorized"}))

And how I would use it:

(defn create
  "Creates a new group."
  [token group-data]
  (user/when-authenticated token
    (fn [api-user]
      (try
        (group/validate group-data)
        (group/insert! group-data)
        (status (response group-data) 201)
        (catch Exception e
          (status (response (.getMessage e)) 422))))))

I think that this is okay, but I have a feeling that there is a more clojure-y way to accomplish this. What would be the best method to use here?

by Taylor Lapeyre at October 29, 2014 04:59 PM


Programming problem. Any help would be awesome.

I have to code this in pseudo-code.

Rose Resale Shop is having a seven-day sale during which the price of any unsold item drops 7 percent each day. For example, an item that costs $12.00 on the first day costs 7 percent less, or $11.16, on the second day. On the third day, the same item is 7 percent less than $11.16, or $10.38. Design an application that allows a user to input a price until an appropriate sentinel value is entered. Output is the price of each item on each day, one through seven.

I have no idea how to have the discount be calculated after each day. I know how to do day one, but after that I'm lost. Any advice/help would be awesome.
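Since the assignment is language-agnostic pseudo-code, here is one hedged sketch (in Scala rather than pseudo-code; rounding to cents each day is my assumption) of the day-by-day 7 percent reduction, which matches the worked example 12.00 → 11.16 → 10.38:

```scala
// Each day's price is 93% of the previous day's, rounded to cents.
def dailyPrices(start: BigDecimal, days: Int = 7): List[BigDecimal] =
  List.iterate(start, days) { p =>
    (p * BigDecimal("0.93")).setScale(2, BigDecimal.RoundingMode.HALF_UP)
  }
```

The same repeated multiplication works in any language: keep one `price` variable and, inside a loop over the seven days, print it and then replace it with `price * 0.93`.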

submitted by SocratesSC
[link] [8 comments]

October 29, 2014 04:58 PM


ScalaForms with bracket notation for nested structures

I have legacy code in php that I'm in the process of porting over to Play! and Scala. The endpoint is posted to via jQuery with form data in this format:


As I understand it from the documentation:

When you are using nested data this way, the form values sent by the browser must be named like address.street, address.city, etc.

So, understandably the following form doesn't work.

case class LogAction(a: String, g: List[String])
case class LogActions(Actions: List[LogAction])

private def logActionsForm = Form(
  "Actions" -> list(
    mapping(
      "a" -> nonEmptyText,
      "g" -> list(nonEmptyText)
    )(LogAction.apply)(LogAction.unapply)
  )
)

It expects data in this format:


Changing the format on the request would "work", but that option would break backwards compatibility on the endpoint so that's not an option.

The questions are:

  1. Is there some simple way to modify the form to allow for the current request format (bracket notation for nested values)?
  2. That failing, what's the simplest way to transform the input prior to feeding it into the form?
  3. Are there any other simpler solutions?
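For option 2, a hedged sketch (the exact incoming key format is an assumption on my part, e.g. `Actions[0][a]` and `Actions[0][g][1]`): rewrite non-numeric bracket segments into Play's dot notation before binding, leaving numeric repeat indices alone.

```scala
// "Actions[0][a]"    -> "Actions[0].a"
// "Actions[0][g][1]" -> "Actions[0].g[1]"
def bracketToDot(key: String): String =
  """\[([^\]]*)\]""".r.replaceAllIn(key, m =>
    if (m.group(1).nonEmpty && m.group(1).forall(_.isDigit)) m.matched // keep numeric index
    else "." + m.group(1))                                            // [name] -> .name
```

Applied to every key of the posted form data (e.g. by mapping over `request.body.asFormUrlEncoded`) before calling `form.bind`, this keeps the endpoint backwards compatible while letting the standard nested mappings work.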

by andrewmh at October 29, 2014 04:44 PM

Planet FreeBSD

Pootle Translation System is now Updated to Version!

If any of you have tried to use the PC-BSD Translation / Pootle web interface in the last year you probably don’t have a lot of good things to say about it.  A 35 word translation might take you a good 30 minutes between the load times (if you could even login to the site without it timing out).  Thankfully those days are behind us!  PC-BSD has upgraded their translation system to use Pootle version and it is blazingly fast.  I went through localizing a small 35 word applet for PC-BSD and it took me roughly 4 minutes compared to what would have taken at least half an hour before due to the slowness of the old pootle software.  Check out the new translation site at

There’s a couple of things you are going to want to keep in mind about the new translation system.  You will have to create a new account.  Upgrading Pootle directly was proving disastrous so we exported all the strings and imported them into the new Pootle server.  What this means is no accounts were transferred since a direct upgrade was not done.  This also means that the strings that were brought in also appear as “fuzzy” translations.  If you have some free time you can help by going to the translation site and approving some of the fuzzy translations.  Many languages have already been done they just need to be reviewed and marked as acceptable (uncheck the fuzzy box if you are 100% certain on the translation).

I hope you guys are as excited as I am over the new translation possibilities!  For more information on how you can help with localization / translating contact me at

Best Regards,



by Josh Smith at October 29, 2014 04:35 PM


How to find out where a umask is getting set for a user?

I've been battling with umask/permission problems for a while now in various cases. I have set www-data (run by nginx/php-fpm) to have a umask of 002 in the /etc/init/php-fpm.conf file, and my deployer user also has umask of 002 in /home/deployer/.bashrc. The application files all have the 0660 permissions (0770 for directories) so that they both can read/write them (the deployer's main group is www-data). However I keep running into cases where this umask is not getting honored, and files are set to 644 or 640.

My current case is when SSHing in as root using an ansible script, with the options of:

sudo: yes
sudo_user: deployer

Files created with ansible are getting file permissions of 644. How do I see where the umask is getting set, and where should I add the umask?

Secondly, is there not a better way to do deployment? I would like to avoid this issue completely and do all deployment work as the www-data user, but apparently that's a security issue. This umask stuff is really complicating deployment.

Thank you.

by AllTheCode at October 29, 2014 04:24 PM

Matthew Green

Attack of the Week: Unpicking PLAID

A few years ago I came across an amusing Slashdot story: 'Australian Gov't offers $560k Cryptographic Protocol for Free'. The story concerned a protocol developed by Australia's Centrelink, the equivalent of our Health and Human Services department, that was wonderfully named the Protocol for Lightweight Authentication of ID, or (I kid you not), 'PLAID'.

Now to me this headline did not inspire a lot of confidence. I'm a great believer in TANSTAAFL -- in cryptographic protocol design and in life. I figure if someone has to use 'free' to lure you in the door, there's a good chance they're waiting in the other side with a hammer and a bottle of chloroform, or whatever the cryptographic equivalent might be.

A quick look at PLAID didn't disappoint. The designers used ECB like it was going out of style; did unadvisable things with RSA encryption, and that was only the beginning.

What PLAID was not, I thought at the time, was bad to the point of being exploitable. Moreover, I got the idea that nobody would use the thing. It appears I was badly wrong on both counts.

This is apropos of a new paper authored by Degabriele, Fehr, Fischlin, Gagliardoni, Günther, Marson, Mittelbach and Paterson entitled 'Unpicking Plaid: A Cryptographic Analysis of an ISO-standards-track Authentication Protocol'. Not to be cute about it, but this paper shows that PLAID is, well, bad.

As is typical for this kind of post, I'm going to tackle the rest of the content in a 'fun' question and answer format.
What is PLAID?
In researching this blog post I was somewhat amazed to find that Australians not only have universal healthcare, but that they even occasionally require healthcare. That this surprised me is likely due to the fact that my knowledge of Australia mainly comes from the first two Crocodile Dundee movies.

It seems that not only do Australians have healthcare, but they also have access to a 'smart' ID card that allows them to authenticate themselves. These contactless smartcards run the proprietary PLAID protocol, which handles all of the ugly steps in authenticating the bearer, while also providing some very complicated protections to prevent user tracking.

This protocol has been standardized as Australian standard AS-5185-2010 and is currently "in the fast track standardization process for ISO/IEC 25185-1.2". You can obtain your own copy of the standard for a mere 118 Swiss Francs, which my currency converter tells me is entirely too much money. So I'll do my best to provide the essential details in this post -- and many others are in the research paper.
How does PLAID work?
Accompanying tinfoil hat
not pictured.
PLAID is primarily an identification and authentication protocol, but it also attempts to offer its users a strong form of privacy. Specifically, it attempts to ensure that only authorized readers can scan your card and determine who you are.

This is a laudable goal, since contactless smartcards are 'promiscuous', meaning that they can be scanned by anyone with the right equipment. Countless research papers have been written on tracking RFID devices, so the PLAID designers had to think hard about how they were going to prevent this sort of issue.

Before we get to what steps the PLAID designers took, let's talk about what one shouldn't do in building such systems.

Let's imagine you have a smartcard talking to a reader where anyone can query the card. The primary goal of an authentication protocol is to perform some sort of mutual authentication handshake and derive a session key that the card can use to exchange sensitive data. The naive protocol might look like this:

Naive approach to an authentication protocol. The card identifies itself
  before the key agreement protocol is run, so the reader can look up a card-specific key.
The obvious problem with this protocol is that it reveals the card ID number to anyone who asks. Yet such identification appears necessary, since each card will have its own secret key material -- and the reader must know the card's key in order to run an authenticated key agreement protocol.

PLAID solves this problem by storing a set of RSA public keys corresponding to various authorized applications. When the reader says "Hi, I'm a hospital", a PLAID card determines which public key it should use to talk to hospitals, then encrypts data under that key and sends it over. Only a legitimate hospital should know the corresponding RSA secret key needed to decrypt this value.
So what's the problem here?
PLAID's approach would be peachy if there were only one public key to deal with. However PLAID cards can be provisioned to support many applications (called 'capabilities'). For example, a citizen who routinely finds himself wrestling crocodiles might possess a capability unique to the Reptile Wound Treatment unit of a local hospital.* If the card responds to this capability, it potentially leaks information about the bearer.

To solve this problem, PLAID cards do some truly funny stuff.

Specifically, when a reader queries the card, the reader initially transmits a set of capabilities that it will support (e.g., 'hospital', 'bank', 'social security center'). If the PLAID card has been provisioned with a matching public key, it goes ahead and uses it. If no matching key is found, however, the card does not send an error -- since this would reveal user-specific information. Instead, it fakes a response by encrypting junk under a special 'dummy' RSA public key (called a 'shill key') that's stored within the card. And herein lies the problem.

You see, the 'shill key' is unique to each card, which presents a completely new avenue for tracking individual cards. If an attacker can induce an error and subsequently fingerprint the resulting RSA ciphertext -- that is, figure out which shill key was used to encipher it -- they can potentially identify your card the next time they encounter you.

A portion of the PLAID protocol (source). The reader (IFD) is on the left, the card (ICC) is on the right.
Thus the problem of tracking PLAID cards devolves to one of matching RSA ciphertexts to unknown public keys. The PLAID designers assumed this would not be possible. What Degabriele et al. show is that they were wrong.
What do German Tanks have to do with RSA encryption?

To distinguish the RSA moduli of two different cards, the researchers employed an old solution to a problem called the German Tank Problem. As the name implies, this is a real statistical problem that the Allies ran up against during WWII.

The problem can be described as follows:

Imagine that a factory is producing tanks, where each tank is printed with a sequential serial number in the ordered sequence 1, 2, ..., N. Through battlefield captures you then obtain a small and (presumably) random subset of k tanks. From the recovered serial numbers, your job is to estimate N, the total number of tanks produced by the factory.

Happily, this is the rare statistical problem with a beautifully simple solution.** Let m be the maximum serial number in the set of k tanks you've observed. To obtain a rough guess Ñ for the total number of tanks produced, you can simply compute the following formula:

Ñ = m + m/k - 1

So what's this got to do with guessing an unknown RSA key?

Well, this turns out to be another instance of exactly the same problem. Imagine that I can repeatedly query your card with bogus 'capabilities' and thus cause it to enter its error state. To foil tracking, the card will send me a random RSA ciphertext encrypted with its (card-specific) "shill key". I don't know the public modulus corresponding to your key, but I do know that the ciphertext was computed using the standard RSA encryption formula m^e mod N.

My goal, as it was with the tanks, is to make a guess for N.

It's worth pointing out that RSA is a permutation, which means that, provided the plaintext messages are randomly distributed, the ciphertexts will be randomly distributed as well. So all I need to do is collect a number of ciphertexts and apply the equation above. The resulting guess Ñ should then serve as a (fuzzy) identifier for any given card.
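The fingerprinting idea can be sketched in a few lines of Python. This is an illustrative toy, not the paper's code: the moduli, sample counts and RNG seed below are made up, and shill-key ciphertexts are simply modeled as uniform values modulo the card's secret RSA modulus N, to which the German Tank estimator is applied.

```python
# Toy model of the shill-key fingerprinting attack (illustrative sketch only;
# moduli, sample sizes and the RNG seed are invented for this demo).

import random

def german_tank_estimate(samples):
    """Frequentist estimator N~ = m + m/k - 1, with m = max and k = count."""
    m, k = max(samples), len(samples)
    return m + m / k - 1

def observe_ciphertexts(modulus, k, rng):
    """Model k error responses from a card as uniform values in [0, modulus)."""
    return [rng.randrange(modulus) for _ in range(k)]

rng = random.Random(42)

# Two cards with different (toy-sized) shill-key moduli.
card_a, card_b = 10_000_019, 13_000_027

# 'Training': fingerprint card A from 1000 ciphertexts.
fingerprint_a = german_tank_estimate(observe_ciphertexts(card_a, 1000, rng))

# Later: scan an unknown card (actually card A again) just 50 times.
unknown = german_tank_estimate(observe_ciphertexts(card_a, 50, rng))

# The 50-scan estimate lands far closer to card A's fingerprint than to
# card B's modulus, so the card is re-identified.
print(abs(unknown - fingerprint_a) < abs(unknown - card_b))
```

The same arithmetic applies unchanged to realistic 2048-bit moduli; only the number of samples needed to separate two nearby moduli changes.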

Results for identifying a given card in a batch of (up to) 100 cards. Each card was initially 'fingerprinted' by collecting k1=1000 RSA ciphertexts. Arbitrary cards were later identified by collecting varying number of ciphertexts (k2).
Now obviously this isn't the whole story -- the value Ñ isn't exact, and you'll get different levels of error depending on how many ciphertexts you collect and how many cards you have to distinguish amongst. But as the chart above shows, it's possible to identify a specific card within a real system (containing 100 cards) with reasonable probability.
But that's not realistic at all. What if there are other cards in play?
It's important to note that real life is nothing like a laboratory experiment. The experiment above considered a 'universe' consisting of only 100 cards, required an initial 'training period' of 1000 scans for a given card, and subsequent recognition demanded anywhere from 50 to 1000 scans of the card.

Since the authentication time for the cards is about 300 milliseconds per scan, this means that even the minimal number (50) of scans still requires about 15 seconds, and only produces the correct result with about 12% probability. This probability goes up dramatically the more scans you get to make.

Nonetheless, even the 50-scan result is significantly better than guessing, and with more concerted scans can be greatly improved. What this attack primarily serves to prove is that homebrew solutions are your enemy. Cooking up a clever approach to foiling tracking might seem like the right way to tackle a problem, but sometimes it can make you even more vulnerable than you were before.
How should one do this right?
The basic property that the PLAID designers were trying to achieve with this protocol is something called key privacy. Roughly speaking, a key-private cryptosystem ensures that an attacker cannot link a given ciphertext (or collection of ciphertexts) to the public key that created them -- even if the attacker knows the public key itself.

There are many excellent cryptosystems that provide strong key privacy. Ironically, most are more efficient than RSA; one solution to the PLAID problem is simply to switch to one of these. For example, many elliptic-curve (Diffie-Hellman or Elgamal) solutions will generally provide strong key privacy, provided that all public keys in the system live in the same group.

A smartcard encryption scheme based on, say, Curve25519 would likely avoid most of the problems presented above, and might also be significantly faster to boot.
What else should I know about PLAID?
There are a few other flaws in the PLAID protocol that make the paper worthy of a careful read -- if only to scare you away from designing your own protocols.

In addition to the shill key fingerprinting attacks, the authors also show how to identify the set of capabilities that a card supports, using a relatively efficient binary search approach. Beyond this, there are also many significantly more mundane flaws due to improper use of cryptography. 
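To illustrate the flavor of such a capability-probing attack, here is a simplified toy (not the paper's exact attack): it assumes an oracle that tells you whether a queried set of capabilities drew a real response (some capability matched) or a shill-key response (none matched), and then binary-searches to isolate each supported capability.

```python
# Simplified sketch of binary-searching a card's supported capabilities,
# given an oracle distinguishing real responses from shill-key responses.

def find_supported(card_responds, universe):
    """Return every capability in `universe` the card accepts, using roughly
    O(s * log n) oracle queries for s supported capabilities."""
    found = []

    def search(subset):
        if not subset or not card_responds(subset):
            return  # no supported capability hides in this half
        if len(subset) == 1:
            found.append(subset[0])
            return
        mid = len(subset) // 2
        search(subset[:mid])
        search(subset[mid:])

    search(list(universe))
    return sorted(found)

# Toy card supporting capabilities 2 and 5 out of eight possibilities.
oracle = lambda subset: any(c in {2, 5} for c in subset)
print(find_supported(oracle, range(8)))
```

In the real attack the "oracle" is built from the fingerprinting technique above, since detecting the shill key is exactly what the German Tank estimator provides.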

By far the ugliest of these is the protocol's lack of proper RSA-OAEP padding, which may present potential (but implementation-specific) vulnerabilities to Bleichenbacher's attack. There's also some almost de rigueur misuse of CBC-mode encryption, and a handful of other flaws that should serve as audit flags if you ever run into them in the wild.

At the end of the day, bugs in protocols like PLAID mainly help to illustrate the difficulty of designing secure protocols from scratch. They also keep cryptographers busy with publications, so perhaps I shouldn't complain too much.
You should probably read up a bit on Australia before you post again.
I would note that this really isn't a question. But it's absolutely true. And to this end: I would sincerely like to apologize to any Australian citizens who may have been offended by this post.


* This capability is not explicitly described in the PLAID specification.

** The solution presented here -- and used in the paper -- is the frequentist approach to the problem. There is also a Bayesian approach, but it isn't required for this application.

by Matthew Green at October 29, 2014 04:17 PM


Missing *out* in Clojure with Lein and Ring

I am running Lein 2 and cider 0.7.0. I made a sample ring app that uses ring/run-jetty to start.

(ns nimbus-admin.handler
  (:require [compojure.core :refer :all]
            [compojure.handler :as handler]
            [clojure.tools.nrepl.server :as nrepl-server]
            [cider.nrepl :refer (cider-nrepl-handler)]
            [ring.adapter.jetty :as ring]
            [clojure.tools.trace :refer [trace]]
            [ring.util.response :refer [resource-response response redirect content-type]]
            [compojure.route :as route]))

(defroutes app-routes 
  (GET "/blah" req "blah")
  (route/resources "/")
  (route/not-found (trace "not-found" "Not Found")))

(def app (handler/site app-routes))

(defn start-nrepl-server []
  (nrepl-server/start-server :port 7888 :handler cider-nrepl-handler))

(defn start-jetty [ip port]
  (ring/run-jetty app {:port port :ip ip}))

(defn -main
  ([] (-main 8080 ""))
  ([port ip & args] 
     (let [port (Integer. port)]
       (start-jetty ip port))))

then connect to it with cider like:

cider-connect 7888

I can navigate to my site and eval forms in emacs and it will update what is running live in my nrepl session, so that is great.

I cannot see any output, whether I use (print "test"), (println "test") or (trace "out" 1).

Finally, my project file:

(defproject nimbus-admin "0.1.0"
  :description ""
  :url ""
  :min-lein-version "2.0.0"
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [com.climate/clj-newrelic "0.1.1"]
                 [com.ashafa/clutch "0.4.0-RC1"]
                 [ring "1.3.1"]
                 [clj-time "0.8.0"]
                 [midje "1.6.3"]
                 [org.clojure/tools.nrepl "0.2.6"]
                 [ring/ring-json "0.3.1"]
                 [org.clojure/tools.trace "0.7.8"]
                 [compojure "1.1.9"]
                 [org.clojure/data.json "0.2.5"]
                 [org.clojure/core.async "0.1.346.0-17112a-alpha"]]
  :plugins [[lein-environ "1.0.0"]
            [cider/cider-nrepl "0.7.0"]]
  :main nimbus-admin.handler)

I start the site with lein run

Edit: I CAN see output, but ONLY when using (.println System/out msg)

by Steve at October 29, 2014 03:56 PM

Configuring Scala Dispatch to use HTTP/1.0

I'm using Dispatch v0.11.0. I want to use it to send HTTP/1.0 requests.

I can see that it is possible to configure a few things:

val dispatcher = Http.configure(_.setRequestTimeoutInMs(timeoutMs))

However, I cannot find a way to configure the object to use HTTP/1.0. Is it possible?

by Pooky at October 29, 2014 03:43 PM

How to force Typesafe Activator to listen

I recently installed Typesafe Activator in a VM. Applications created by Activator can be accessed after port forwarding, but the Activator UI itself seems to listen only on localhost. How do I make it listen on all interfaces?

by interlude at October 29, 2014 03:39 PM

Lazy eager map evaluation

There are basically two options to evaluate a map in Scala.

  • Lazy evaluation computes the function that is passed as a parameter when the next value is needed. If the function takes one hour to execute, then there is an hour's wait whenever a value is needed (e.g. Stream and Iterator).
  • Eager evaluation computes the function when the map is defined. It produces a new list (Vector or whatever) and stores the results, keeping the program busy during that time.
  • With Future we can obtain the list (Seq or whatever) in a separate thread; this means that our thread doesn't block, but the results still have to be stored.

So I did something different, please check it here.

This was a while ago, so I don't remember whether I tested it. The point is to have a map that applies concurrently (non-blocking) and somewhat eagerly to a set of elements, filling a buffer (sized to the number of cores in the machine, and no more). This means that:

  1. The invocation of the map doesn't block the current thread.
  2. Obtaining an element doesn't block the current thread (in case there was time to calculate it before and store the result in the buffer).
  3. Infinite lists can be handled because we only prefetch a few results (about 8, depending on the number of cores).
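For what it's worth, the core idea (eagerly submit work, but cap prefetching with a bounded buffer) can be sketched in a few lines of Python; this is my own illustrative translation, not the code linked above, and the names (buffered_map, buffer_size) are invented here:

```python
import itertools
import threading
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

def buffered_map(fn, iterable, buffer_size=8):
    """Lazily yield fn(x) for each x, computing up to buffer_size results ahead."""
    executor = ThreadPoolExecutor(max_workers=buffer_size)
    futures = Queue(maxsize=buffer_size)  # bounded queue: caps the prefetching
    sentinel = object()

    def producer():
        # Eagerly submit work; put() blocks once the buffer is full, so at
        # most ~buffer_size results are ever computed ahead of the consumer.
        for item in iterable:
            futures.put(executor.submit(fn, item))
        futures.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()

    while True:
        f = futures.get()
        if f is sentinel:
            executor.shutdown()
            return
        yield f.result()  # blocks only if this result isn't ready yet

# Safe on infinite iterables, because only buffer_size items are prefetched.
squares = buffered_map(lambda x: x * x, itertools.count(), buffer_size=4)
print([next(squares) for _ in range(5)])  # [0, 1, 4, 9, 16]
```

Submitting and consuming through the same bounded queue also preserves the input order, which a plain executor map over an infinite source would not give you for free.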

So this all sounds very good and you may be wondering what's the problem. The problem is that this solution is not particularly elegant, IMHO. Assuming the code I shared works in Java and/or Scala, to iterate over the elements of the iterable produced by the map I would only need to write:

new CFMap(whateverFunction).apply(whateverIterable)

However what I would like to write is something like:


whateverIterable.bmap(whateverFunction)

As is usual in Scala (the 'b' is for buffered), or perhaps something like:

whateverIterable.toBuffered.map(whateverFunction)

Either of them works for me. So the question is, how can I do this in an idiomatic way in Scala? Some options:

  • Monads: create a new collection "Buffered" so that I can use the toBuffered method (which should be added to the existing collections as an implicit) and implement map and everything else for this Buffered thing (sounds like quite a lot of work).
  • Implicits: create an implicit method that transforms the usual collections, or their superclass (I'm not sure which one it should be, Iterable maybe?), into something else to which I can apply the .bmap method and obtain something from it, probably an iterable.
  • Other: there are probably many options I have not considered so far. It's possible that some library already implements this (I'd actually be surprised if none did; I can't believe nobody thought of this before). Using something that has already been done is usually a good idea.

Please let me know if something is unclear.

by Trylks at October 29, 2014 03:37 PM



Redis-client inside Akka actor does not work with higher concurrency

I am using debasishg/scala-redis as the Redis client inside an Akka actor. Inside the actor's receive method, I wrap the call to Redis in a Future:

case msg =>
  val f = Future {
    // the actual Redis call went here, e.g. a list read such as:
    redisClient.lrange("key", 0, -1)
  }
  f onComplete {
    case Success(list) => println(list(0))
    case Failure(error) => println(error)
  }

While using ab -n 50 -c 1 http://url it works fine.

However, if I increase the concurrency to say 2 like ab -n 50 -c 2 http://url, I start getting error,

java.lang.Exception: Protocol error: Got ($,[B@48497d26) as initial reply byte.

Initially, I just made the call without the Future, and even with high concurrency it worked fine. However, after wrapping it in a Future, it stopped working.

Any help in fixing the issue will be appreciated.

by mohit at October 29, 2014 03:30 PM

I made some customization in Zoho CRM module and now I want to reuse these customization in another Zoho CRM

I made some customization in my demo Zoho CRM module and now I want to reuse these customization in my original Zoho CRM.

Can I export or publish these customization from my demo Zoho CRM and import in my original Zoho CRM?

If yes then guide me to export & import these customization in between to different Zoho CRM accounts.


by Hashim at October 29, 2014 03:30 PM

Dave Winer

20 years later, platforms are still Chinese households

On October 29, 1994 I wrote an emotional piece about being a developer for Apple.

I had just read an Amy Tan novel about life in China many years ago, and found a lot in common with the life of Mac developers in 1994. Developers cook the meals, care for the babies, and don't ask for much in return. Apple was in trouble, my theory went, because the developers weren't getting enough love.

"A platform is a Chinese household. One rich husband. Lots of wives. If the husband abuses one wife, it hurts all the wives. All of sudden food starts getting cold. The bed is empty. All of a sudden husband isn't so rich."

Today Apple has recovered. The platform isn't exactly a happy household, but it's better than it was then. Twitter has the problem Apple had then.

The more things change, the more they stay the same, someone once said.


Here's an excellent example of the mistake many platform vendors make.

IBM is not a developer that will help Twitter overcome the obstacles in its way. The devs that matter are people, not huge companies.

Apple made that mistake too, parading out partnerships with Borland and IBM, and overlooking the developers, including Microsoft, btw -- who were cooking the meals.

October 29, 2014 03:30 PM


Is it possible to obtain a total function by composition of partial functions?

This statement is Theorem 1.1 (page 39) of Computability, Complexity and languages by Martin Davis:

If function $h$ is obtained from the (partially) computable functions $f$, $g_1$, $g_2$, ..., $g_k$ by composition then $h$ is (partially) computable.

What I understand from this theorem is that if there is at least one partial function among the composed functions, then $h$ is a partial function. Am I right?

Here is an example: the functions $x$, $x+y$ and $x \cdot y$ are total, but $x-y$ is a partial function. It is possible to obtain $4x^2-2x$ by composition of these functions in the following way; this function is total, but it is obtained from a non-total function ($x-y$).

$2x = x+x$

$4x^2 = (2x)(2x)$

$4x^2-2x = (4x^2) - (2x)$   (composition with $x-y$)

This is an example from the book; does it contradict the theorem?

by Mohammad at October 29, 2014 03:24 PM

Is the given functional dependency a partial dependency or transitive dependency or both?

Relation - R(A,B,C,D)

Functional dependencies

  1. AB->C
  2. BC->D

Is BC->D a partial dependency or transitive dependency or both?

by Atul Gangwar at October 29, 2014 03:20 PM


vim-fireplace and chestnut (now working!)

A previous post asking how to get vim-fireplace to work with a browser-repl from chestnut just got deleted. Here is my solution, in case anybody else has been wondering.

Ok I finally got this to work using the advice from /u/joshuadavey.

My precise steps:

  1. Open vim
  2. Back in the terminal, run lein repl
  3. In the repl, do (run)
  4. In vim, open the cljs file and do

    :Piggieback (weasel.repl.websocket/repl-env :ip "" :port 9001) 
  5. Open browser to http://localhost:10555/

  6. In vim, cqc (js/alert "woohoo!")

submitted by inyourtenement

October 29, 2014 03:19 PM


How to get key value pairs from such database?

So, I have a database file in text format with the following layout: an actor name, followed by the list of movies he/she has worked in. Last and first names are separated by a comma, and a single actor can have several films listed. How do I create key-value pairs, where the key is the actor name and the value is the list of movies, from this format in either Scala or Python for Spark?

'Malid, Paddy               In the Land of the Head Hunters (1914)  [Kenada]  <5>

'Mixerman' Sarafin, Eric    "Pensado's Place" (2011) {Eric "Miyerman" Sarafin (#2.6)} 

'Monkey' Stevens, Neal      Grey Britain (2009)  [Pestilence]
                            Half Hearted (2010)  (as Monkey)  [Tattooist]  <15>
                            Sign Language (2010/II)  (as Neal Stevens)  [Steve]  <6>
                            The Leap (2014/I)  [Lead Smuggler]
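One plausible plain-Python sketch for this layout (the splitting rule -- actor and first title separated by a tab or a run of two or more spaces, with indented continuation lines holding further titles -- is inferred from the sample above and may need adjusting for a full IMDB dump; parse_actors is a hypothetical helper name):

```python
import re
from collections import defaultdict

def parse_actors(lines):
    """Group 'actor<TAB/spaces>movie' records into {actor: [movies]}."""
    movies = defaultdict(list)
    current = None
    for raw in lines:
        line = raw.rstrip()
        if not line:
            continue  # blank lines separate records but carry no data
        if line[0].isspace():
            # Continuation line: another movie for the current actor.
            if current is not None:
                movies[current].append(line.strip())
        else:
            # New record: split the actor name from the first movie title.
            name, title = re.split(r"\t+| {2,}", line, maxsplit=1)
            current = name.strip()
            movies[current].append(title.strip())
    return dict(movies)

sample = """\
'Monkey' Stevens, Neal      Grey Britain (2009)  [Pestilence]
                            Half Hearted (2010)  (as Monkey)  [Tattooist]  <15>
'Mixerman' Sarafin, Eric    "Pensado's Place" (2011)
"""
result = parse_actors(sample.splitlines())
print(len(result["'Monkey' Stevens, Neal"]))  # 2
```

The resulting pairs could then be handed to Spark, e.g. via `sc.parallelize(list(result.items()))`; reading the raw file directly with `sc.textFile` is trickier, because multi-line records can straddle partition boundaries.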

by MetallicPriest at October 29, 2014 03:19 PM