Planet Primates

January 30, 2015


Real degree versus Finite field

Let $f$ be a Boolean function.

Let $g$ be a minimum-degree real polynomial that represents $f$, with degree $d$.

Let $g_{p}$ be a minimum-degree $\Bbb F_p$ polynomial that represents $f$, with degree $d_p$.

Is $d_p\leq d$?

If $\gcd(p_1,p_2)=1$, is there a relation between the $d_{p_i}$'s?

What is a good reference to understand relations among these degrees?

Would it be reasonable to query about approximate degrees over $\Bbb F_p$?
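As a concrete warm-up (my example, not from the post): parity on $n$ bits has real degree $n$ but $\Bbb F_2$-degree $1$, so the two degrees can differ drastically. The $\Bbb F_2$-degree can be read off the algebraic normal form (ANF), computable from the truth table with a Möbius transform; a small sketch:

```scala
object AnfDegree {
  // ANF (algebraic normal form) coefficients via the Möbius transform over F_2:
  // for each bit i, XOR into a(m) the value at m with bit i cleared.
  def anf(tt: Array[Int], n: Int): Array[Int] = {
    val a = tt.clone()
    for (i <- 0 until n; m <- 0 until (1 << n) if (m & (1 << i)) != 0)
      a(m) = a(m) ^ a(m ^ (1 << i))
    a
  }

  // F_2-degree: largest popcount of a monomial mask with a nonzero ANF coefficient.
  def degreeF2(tt: Array[Int], n: Int): Int = {
    val coeffs = anf(tt, n)
    (0 until (1 << n)).filter(m => coeffs(m) == 1)
      .foldLeft(0)((d, m) => math.max(d, Integer.bitCount(m)))
  }

  def main(args: Array[String]): Unit = {
    val n = 3
    // Parity: real degree is n = 3, but its ANF is x1 + x2 + x3, degree 1.
    val parity = Array.tabulate(1 << n)(m => Integer.bitCount(m) & 1)
    println(degreeF2(parity, n)) // 1
    // AND: the monomial x1 x2 x3 over both R and F_2, degree 3.
    val and = Array.tabulate(1 << n)(m => if (m == (1 << n) - 1) 1 else 0)
    println(degreeF2(and, n))    // 3
  }
}
```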

by Turbo at January 30, 2015 07:08 PM


How to sum values and group them by a key value in Scala's List of Map?

I have a List of Map:

val list = List(
  Map("id" -> "A", "value" -> 20, "name" -> "a"),
  Map("id" -> "B", "value" -> 10, "name" -> "b"),
  Map("id" -> "A", "value" -> 5, "name" -> "a"),
  Map("id" -> "C", "value" -> 1, "name" -> "c"),
  Map("id" -> "D", "value" -> 60, "name" -> "d"),
  Map("id" -> "C", "value" -> 3, "name" -> "c")
)

I want to sum the value and group them by id value in the most efficient way so it becomes:

Map(A -> 25, B -> 10, C -> 4, D -> 60)
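One way to get there (my sketch, not from the post) is a single groupBy followed by a per-group sum; the maps are Map[String, Any], hence the casts:

```scala
object SumByKey {
  val list = List(
    Map("id" -> "A", "value" -> 20, "name" -> "a"),
    Map("id" -> "B", "value" -> 10, "name" -> "b"),
    Map("id" -> "A", "value" -> 5,  "name" -> "a"),
    Map("id" -> "C", "value" -> 1,  "name" -> "c"),
    Map("id" -> "D", "value" -> 60, "name" -> "d"),
    Map("id" -> "C", "value" -> 3,  "name" -> "c")
  )

  // Group by the "id" entry, then sum the "value" entries in each group.
  val totals: Map[String, Int] =
    list.groupBy(_("id").asInstanceOf[String])
        .map { case (id, ms) => id -> ms.map(_("value").asInstanceOf[Int]).sum }
}
```

If the list is very large, a single fold into a Map avoids materializing the intermediate groups.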

by suud at January 30, 2015 07:06 PM




Slick lifted embedding CHECK constraints?

I'm using Slick 3.0.0-M1's lifted embedding to define some tables in a database. I'd like to add some properties, including CHECK constraints and TRIGGERs, to my schema. Slick doesn't seem to support TRIGGERs or CHECK constraints, so I get the sense that I'll have to add them myself using hand-written SQL.

My question is as follows: How do I add a line of SQL (as a string) to a Slick Table so that when table.schema.create is executed, those lines will be added to the schema?


by Hawk Weisman at January 30, 2015 06:53 PM

difference between action and action.async

I wrote two actions to test the difference between Action and Action.async. However, I've found that both methods return their value only after Thread.sleep completes. Shouldn't Action.async return the value immediately, as per the description?

def intensiveComputation(): Int = {
    Thread.sleep(5000)
    1
}

def testPromise() = Action {
    Ok("sync" + intensiveComputation())
}

def testPromiseAsync() = Action.async {
    val futureint = scala.concurrent.Future { intensiveComputation() } => Ok("async" + i))
}
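To be clear about what Action.async buys you (my note, not from the post): the client still waits until the Future completes; what changes is that no request thread blocks while waiting. In plain Scala terms:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import

object AsyncDemo {
  // The Future computes on another thread; the caller's thread is free
  // until someone actually demands the result (here, Await for demonstration).
  def compute(): Future[Int] = Future {
    Thread.sleep(100) // stand-in for the intensive computation
    1
  }

  def main(args: Array[String]): Unit = {
    val f = compute()
    println("caller thread is free while the Future runs")
    println(Await.result(f, 2.seconds)) // the value still arrives only now
  }
}
```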

by vaj oja at January 30, 2015 06:47 PM

Case classes, persistence and Play forms

Over the course of creating a basic application using Play and Anorm, I've encountered a problem when dealing with entities not yet saved to the database. The form obviously doesn't have a field for the ID, so I can't create a mapping using the case class's apply method. I ended up creating two classes, one for persisted entities and one for not-yet-persisted ones, and the code looks something like this:

case class EphemeralUser(email: String)

case class PersistentUser(id: Long, email: String)

val userForm = Form(mapping("email" -> text)(EphemeralUser.apply)(EphemeralUser.unapply))

def create(user: EphemeralUser): PersistentUser = { /* Save with Anorm */ }

Is there a more elegant way to deal with this using a single case class User(id: Option[Long], email: String)? Or even better, some way to remove the code repetition, because I quite like the fact that persisted and ephemeral users are different types.
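One pattern that keeps a single case class while preserving the type distinction (my sketch, not from the post) is to parameterize the id slot:

```scala
object Users {
  // One case class; the type of the id slot says whether the user is persisted.
  case class User[Id](id: Id, email: String)

  type EphemeralUser  = User[Unit]
  type PersistentUser = User[Long]

  def ephemeral(email: String): EphemeralUser = User((), email)

  // Stand-in for the Anorm save: the database would assign the real id.
  def create(u: EphemeralUser, assignedId: Long): PersistentUser =
    User(assignedId, u.email)
}
```

The form mapping then binds an EphemeralUser, and only create can produce a PersistentUser, so the compiler still rules out mixing the two up.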

by synapse at January 30, 2015 06:35 PM


Traveling Salesman Problem Question (Xpost r/sysor)

I have a quick question that I'd appreciate your expertise on. Long story short, one of the main stipulations of every TSP and VRP is that each node must be visited exactly once. Is there not a well-known algorithm for cases where this is not true?

For example, if I am a salesman with only 5 items to sell, and there are 100 people at different locations who will all pay different amounts, how do I choose my 5-node (plus origin) route?

To me it seems like the problem should be much easier, but I am having a hard time finding literature on it. And for the record, I'm not having trouble with the optimization so much as with building a heuristic.
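For what it's worth, here is one simple heuristic of the kind being asked about, as a sketch under my own assumptions (the coordinates, payments, and "profit = payment minus added travel cost" rule are all made up for illustration): greedily insert the customer whose payment most exceeds the travel cost its insertion adds, until k customers are routed or no insertion is profitable.

```scala
object SalesmanHeuristic {
  type Pt = (Double, Double)
  case class Customer(pos: Pt, pay: Double)

  def dist(a: Pt, b: Pt): Double = math.hypot(a._1 - b._1, a._2 - b._2)

  // Cheapest place to splice p into the closed tour (a list of consecutive points):
  // returns (insertion index, added travel cost).
  def insertionCost(tour: List[Pt], p: Pt): (Int, Double) =
    tour.zip(tour.tail).zipWithIndex
      .map { case ((a, b), i) => (i + 1, dist(a, p) + dist(p, b) - dist(a, b)) }
      .minBy(_._2)

  // Greedy cheapest-insertion: repeatedly add the customer with the best
  // profit = pay - added cost, stopping at k customers or when nothing pays off.
  def route(origin: Pt, customers: List[Customer], k: Int): List[Pt] = {
    var tour = List(origin, origin) // closed tour: start and end at the origin
    var remaining = customers
    var taken = 0
    while (taken < k && remaining.nonEmpty) {
      val scored = { c =>
        val (at, cost) = insertionCost(tour, c.pos)
        (c, at, c.pay - cost)
      }
      val (best, at, profit) = scored.maxBy(_._3)
      if (profit <= 0) return tour
      tour = tour.take(at) ::: (best.pos :: tour.drop(at))
      remaining = remaining.filterNot(_ == best)
      taken += 1
    }
    tour
  }
}
```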

Thanks in advance!

submitted by beatlesfan18
[link] [comment]

January 30, 2015 06:30 PM


NoSuchElementException: None.get in play framework for scala

I need to build an update method, but when I test it, it fails with the error NoSuchElementException: None.get.


object UserController extends Controller {

  def update(id: Long) = DBAction { implicit rs =>
    var user = simpleUserForm.bindFromRequest.get = Users.toOption(id)
    // ... persist `user` ...
  }

  val simpleUserForm: Form[User] = Form {
    mapping(
      "firstName" -> nonEmptyText,
      "lastName" -> nonEmptyText,
      "email" -> email,
      "birthDate" -> nonEmptyText,
      "phone" -> nonEmptyText,
      "username" -> text,
      "password" -> nonEmptyText
    )(User.apply)(User.unapply)
  }
}


@import models.auth.Users
@(title: String, user:models.auth.User)


<form method="post" action="@controllers.auth.routes.UserController.update(Users.toLong(">
    <input type="text" placeholder="First Name" name="firstName" value="@user.firstName"/><br/>
    <input type="text" placeholder="Last Name" name="lastName" value="@user.lastName"/><br/>
    <input type="email" placeholder="Email" name="email" value="" /><br/>
    <input type="text" placeholder="Phone" name="phone" value="" /><br/>
    <input type="text" placeholder="Birthdate(dd/MM/yyyy)" name="birthDate" value="@user.birthDate" /><br/>
    <input type="text" placeholder="Username" name="username" value="@user.username" /><br/>

    <input type="submit" value="Update User" />


POST        /user/:id/         controllers.auth.UserController.update(id:Long)

I have already done this for create, read and delete, but for update I get an error at the line
var user = simpleUserForm.bindFromRequest.get

the error is NoSuchElementException: None.get
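That exception usually means the form binding failed: when validation fails, there is no bound value for .get to return, so the fix is to handle both outcomes (in Play, with bindFromRequest.fold). As a pure-Scala analogy (the names here are made up for illustration):

```scala
object FormBindDemo {
  // Stand-in for form binding: None plays the role of a failed bindFromRequest.
  def bindResult(valid: Boolean): Option[String] =
    if (valid) Some("user") else None

  // Calling .get on the failed case is exactly what throws "None.get".
  def unsafe(valid: Boolean): String = bindResult(valid).get

  // Handle both outcomes instead, as Play's form.fold does.
  def handle(valid: Boolean): String =
    bindResult(valid).fold("bad request")(u => s"updated $u")
}
```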

by Igor Ronner at January 30, 2015 06:29 PM

Adding Play Scala as dependency in sbt error

I'm trying to add Play Scala as a dependency in my build.sbt file. Here is my build.sbt:

name := "name"

version := "0.0" 

lazy val root = (project in file(".")).enablePlugins(play.PlayScala)

scalaVersion := "2.11.2" 

resolvers += Resolver.mavenLocal

organization := "com.suredbits.core"

libraryDependencies ++= {
  val sprayV = "1.3.2"
  val akkaV = "2.3.8"
  Seq(
    "org.scalatest"          %  "scalatest_2.11"  % "2.2.0",
    "io.spray"               %% "spray-can"       % sprayV withSources() withJavadoc(),
    "io.spray"               %% "spray-routing"   % sprayV withSources() withJavadoc(),
    "io.spray"               %% "spray-testkit"   % sprayV % "test" withSources() withJavadoc(),
    "com.typesafe.akka"      %% "akka-actor"      % akkaV withSources() withJavadoc(),
    "com.typesafe.akka"      %% "akka-testkit"    % akkaV % "test" withSources() withJavadoc(),
    "org.specs2"             %% "specs2-core"     % "2.4.7-scalaz-7.0.6" % "test" withSources() withJavadoc(),
    "org.scalactic"          %% "scalactic"       % "2.2.1" % "test" withSources() withJavadoc(),
    "io.spray"               %% "spray-json"      % "1.3.0" withSources() withJavadoc(),
    "com.github.nscala-time" %% "nscala-time"     % "1.6.0" withSources() withJavadoc(),
    "com.novocode"           %  "junit-interface" % "0.10" % "test" withSources() withJavadoc(),
    "ch.qos.logback"         %  "logback-classic" % "0.9.28" % "test" withSources() withJavadoc(),
    "org.slf4j"              %  "slf4j-nop"       % "1.6.4" withSources() withJavadoc()
  )
}

testOptions += Tests.Argument(TestFrameworks.JUnit, "-q", "-v", "-s", "-a")

parallelExecution in Test := false

logBuffered := false

scalacOptions ++= Seq("-unchecked", "-deprecation", "-feature")

Here is my plugins.sbt

// The Typesafe repository
resolvers += "Typesafe repository" at ""

//workaround for enablePlugins error in sbt  
//dependencyOverrides += "org.scala-sbt" % "sbt" % "0.13.7"

//Play sbt plugin for Play projects
addSbtPlugin("" % "sbt-plugin" % "2.2.6")

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.4.0")

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.11.2")

and lastly here is my


I've read quite a few Stack Overflow posts trying to solve this, including one suggesting deleting the ~/.sbt directory and running sbt again, but this didn't work.

EDIT: The error message

lazy val root = (project in file(".")).enablePlugins(play.PlayScala)
sbt.compiler.EvalException: Type error in expression
    at sbt.compiler.Eval.checkError(Eval.scala:384)
    at sbt.compiler.Eval.compileAndLoad(Eval.scala:183)
    at sbt.compiler.Eval.evalCommon(Eval.scala:152)
    at sbt.compiler.Eval.evalDefinitions(Eval.scala:122)
    at sbt.EvaluateConfigurations$.evaluateDefinitions(EvaluateConfigurations.scala:272)
    at sbt.EvaluateConfigurations$.evaluateSbtFile(EvaluateConfigurations.scala:110)
    at sbt.Load$.sbt$Load$$loadSettingsFile$1(Load.scala:710)
    at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:715)
    at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:714)
    at scala.Option.getOrElse(Option.scala:120)
    at sbt.Load$.sbt$Load$$memoLoadSettingsFile$1(Load.scala:714)
    at sbt.Load$$anonfun$loadFiles$1$2.apply(Load.scala:721)
    at sbt.Load$$anonfun$loadFiles$1$2.apply(Load.scala:721)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at scala.collection.TraversableLike$
    at sbt.Load$.loadFiles$1(Load.scala:721)
    at sbt.Load$.discoverProjects(Load.scala:732)
    at sbt.Load$.discover$1(Load.scala:545)
    at sbt.Load$.loadTransitive(Load.scala:574)
    at sbt.Load$.loadProjects$1(Load.scala:442)
    at sbt.Load$.loadUnit(Load.scala:446)
    at sbt.Load$$anonfun$18$$anonfun$apply$11.apply(Load.scala:281)
    at sbt.Load$$anonfun$18$$anonfun$apply$11.apply(Load.scala:281)
    at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:91)
    at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:90)
    at sbt.BuildLoader.apply(BuildLoader.scala:140)
    at sbt.Load$.loadAll(Load.scala:334)
    at sbt.Load$.loadURI(Load.scala:289)
    at sbt.Load$.load(Load.scala:285)
    at sbt.Load$.load(Load.scala:276)
    at sbt.Load$.apply(Load.scala:130)
    at sbt.Load$.defaultLoad(Load.scala:36)
    at sbt.BuiltinCommands$.doLoadProject(Main.scala:481)
    at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:475)
    at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:475)
    at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:58)
    at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:58)
    at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:60)
    at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:60)
    at sbt.Command$.process(Command.scala:92)
    at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:98)
    at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:98)
    at sbt.State$$anon$1.process(State.scala:184)
    at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:98)
    at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:98)
    at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
    at sbt.MainLoop$.next(MainLoop.scala:98)
    at sbt.MainLoop$.run(MainLoop.scala:91)
    at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:70)
    at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:65)
    at sbt.Using.apply(Using.scala:24)
    at sbt.MainLoop$.runWithNewLog(MainLoop.scala:65)
    at sbt.MainLoop$.runAndClearLast(MainLoop.scala:48)
    at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:32)
    at sbt.MainLoop$.runLogged(MainLoop.scala:24)
    at sbt.StandardMain$.runManaged(Main.scala:53)
    at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:57)
    at xsbt.boot.Launch$.withContextLoader(Launch.scala:77)
    at xsbt.boot.Launch$.run(Launch.scala:57)
    at xsbt.boot.Launch$$anonfun$explicit$1.apply(Launch.scala:45)
    at xsbt.boot.Launch$.launch(Launch.scala:65)
    at xsbt.boot.Launch$.apply(Launch.scala:16)
    at xsbt.boot.Boot$.runImpl(Boot.scala:32)
    at xsbt.boot.Boot$.main(Boot.scala:21)
    at xsbt.boot.Boot.main(Boot.scala)
[error] sbt.compiler.EvalException: Type error in expression
[error] Use 'last' for the full log.

by Chris Stewart at January 30, 2015 06:17 PM


Is this combinatorial optimisation problem similar to any known problem?

The problem is as follows:

We have a two dimensional array/grid of numbers, each representing some "benefit" or "profit." We also have two fixed integers $w$ and $h$ (for "width" and "height".) And a fixed integer $n$.

We now wish to overlay $n$ rectangles of dimensions $w \times h$ on the grid such that the total sum of the values of cells in these rectangles is maximized.

The following picture is an example of a two-dimensional grid with two such rectangles overlaid on it (the picture does not show the optimal solution, just one possible overlay, where $w = h = 2$ and $n = 2$):

Grid example

The rectangles cannot intersect (otherwise we would just need to find the optimal position for one rectangle and then put all the rectangles in that position.)

In the example above the total sum of values in cells would be $-2 + 4.2 + 2.4 + 3.14 + 2.3 -1.4 + 1 - 3.1$

Is this similar to any known problem in combinatorial optimisation, so that I can start doing some reading and try to find ways to solve it?

Some more background for those interested:

So far the only ideas I have had are either a greedy algorithm (which would find the best location for the first rectangle, then the best non-overlapping location for the second rectangle, etc.) or some metaheuristic such as genetic algorithms.
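The greedy idea needs to score many candidate windows; with a summed-area table, every $w \times h$ window sum can be evaluated in $O(1)$ after linear preprocessing. A sketch of that building block (mine, not from the post):

```scala
object RectSum {
  // Summed-area table: pre(i)(j) = sum of all grid cells above and left of (i, j).
  def prefix(grid: Array[Array[Double]]): Array[Array[Double]] = {
    val r = grid.length
    val c = grid(0).length
    val pre = Array.ofDim[Double](r + 1, c + 1)
    for (i <- 0 until r; j <- 0 until c)
      pre(i + 1)(j + 1) = grid(i)(j) + pre(i)(j + 1) + pre(i + 1)(j) - pre(i)(j)
    pre
  }

  // Sum of the h x w rectangle whose top-left cell is (i, j), in O(1).
  def rectSum(pre: Array[Array[Double]], i: Int, j: Int, h: Int, w: Int): Double =
    pre(i + h)(j + w) - pre(i)(j + w) - pre(i + h)(j) + pre(i)(j)
}
```

Finding the best single window is then a max over all window positions; the greedy layer on top would skip candidates that overlap already-placed rectangles.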

In reality I wish to solve this problem with a grid which has around a million cells and tens of thousands (or even hundreds of thousands) of rectangles, though it is not necessary to solve it in a short time (i.e. it would be acceptable for the algorithm to take hours or even days). I am not expecting an exact solution, but I want one which is as good as possible given these constraints.


by fiftyeight at January 30, 2015 06:14 PM



Scala function evaluation

In below code :

object typeparam {

  val v = new MyClass[Int]                        //> v  : typeparam.MyClass[Int] = typeparam$MyClass@17943a4

  def f1(a: Int) = {
    println("f here")
    a
  }                                               //> f1: (a: Int)Int

  v.foreach2(f1)                                  //> foreach2

  class MyClass[A] {

    def foreach2[B](f: B => A) = {
      println("foreach2")
    }
  }
}

Why is the function f1 not invoked within foreach2?

If I instead use

object typeparam {

  val v = new MyClass[Int]                        //> v  : typeparam.MyClass[Int] = typeparam$MyClass@14fe5c

  def f1() = {
    println("f here")
  }                                               //> f1: ()Unit

  v.foreach2(f1)                                  //> f here
                                                  //| foreach2

  class MyClass[A] {

    def foreach2[B](f: Unit) = {
      println("foreach2")
    }
  }
}

the function f1 appears to get evaluated before foreach2 is entered, as "f here" is printed before "foreach2". Why is this the case?
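What is going on (my explanation and sketch, not from the post): in the first version, foreach2 takes a function value, so f1 is passed by eta-expansion and never applied inside the body; in the second version, the parameter type is Unit, so the argument expression f1 is evaluated to its Unit result before foreach2's body runs:

```scala
object EvalDemo {
  val log = scala.collection.mutable.Buffer[String]()

  def f1(a: Int): Int = { log += "f here"; a }
  def g1(): Unit = { log += "f here"; () }

  // Takes a function value: f is never applied here, so nothing of f1 runs.
  def wantsFunction(f: Int => Int): Unit = { log += "foreach2"; () }

  // Takes a Unit value: the argument expression is evaluated before the body.
  def wantsUnit(u: Unit): Unit = { log += "foreach2"; () }
}
```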

by blue-sky at January 30, 2015 06:03 PM


A small follow-up to my remarks from ...

A small follow-up to my remarks from two years ago: told you so!
BMW vehicles with the ConnectedDrive option could be unlocked via the mobile network within a few minutes. This was possible because of security holes in the so-called Remote Services. Through this service you can unlock your car's driver door with a mobile app. Because of the security holes, an attacker could do the same without any involvement of the vehicle's owner.
BMW will now fix this. By uploading new firmware. Yes, over the air, onto the cars already sold. BMW remotely loads new firmware onto them. That alone would be reason enough for me not to want such systems: that someone can push new software onto them remotely, by radio, without the owner noticing it or being able to inspect or stop it.

January 30, 2015 06:01 PM

Cuba senses an opportunity in the negotiations with the ...

Cuba senses an opportunity in the negotiations with the USA and, among other things, is straight away demanding reparations for the blockade and the return of Guantanamo Bay by the USA.

January 30, 2015 06:01 PM

A victim of a miscarriage of justice has won damages for pain and suffering from ...

A victim of a miscarriage of justice has won damages for pain and suffering from the court-appointed expert. As far as I know that is a fairly rare thing, because the court has to conclude that her assessment was grossly negligent. In this case the accusation was that the man had abused his former foster daughter. He spent almost 2 years in prison for it, innocent.

January 30, 2015 06:01 PM


What is the most efficient way to generate a random permutation from probabilistic pairwise swaps?

The question I am interested in is related to generating random permutations. Given a probabilistic pairwise swap gate as the basic building block, what is the most efficient way to produce a uniformly random permutation of $n$ elements? Here I take "probabilistic pairwise swap gate" to be the operation which implements a swap gate between two chosen elements $i$ and $j$ with some probability $p$, which can be freely chosen for each gate, and the identity otherwise.

I realise this is not the usual way to generate random permutations, where one might use something like a Fisher-Yates shuffle; however, that will not work for the application I have in mind, as the allowed operations are different.

Clearly this can be done; the question is how efficiently. What is the least number of probabilistic swaps necessary to achieve this goal?


Anthony Leverrier provides a method below which does indeed produce the correct distribution using $O(n^2)$ gates, with Tsuyoshi Ito providing another approach with the same scaling in the comments. However, the best lower bound I have seen so far is $\lceil \log_2(n!) \rceil$, which scales as $O(n\log n)$. So the question still remains open: is $O(n^2)$ the best that can be done (i.e., is there a better lower bound)? Or, alternatively, is there a more efficient circuit family?


Several of the answers and comments have proposed circuits which are composed entirely of probabilistic swaps where the probability is fixed at $\frac{1}{2}$. Such a circuit cannot solve this problem, for the following reason (lifted from the comments):

Imagine a circuit which uses $m$ such gates. Then there are $2^m$ equiprobable computational paths, and so any permutation must occur with probability $k 2^{-m}$ for some integer $k$. However, for a uniform distribution we require $k 2^{-m}=\frac{1}{n!}$, which can be rewritten as $k\, n! = 2^m$. Clearly this cannot be satisfied for any integer $k$ when $n\geq 3$, since $3 \mid n!$ for $n\geq 3$ but $3\nmid 2^m$.

UPDATE (from mjqxxxx who is offering the bounty):

The bounty being offered is for (1) a proof that $\omega(n \log n)$ gates are required, or (2) a working circuit, for any $n$, that uses fewer than $n(n-1)/2$ gates.
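For reference, one explicit circuit that meets the $n(n-1)/2$ benchmark (my construction for illustration; it need not coincide with Leverrier's) inserts element $i$ uniformly among the first $i$ positions using swaps $(j,i)$, $j = 1, \dots, i-1$, each with probability $1/(i-j+1)$. The sketch below also checks the output distribution exactly by enumerating all gate outcomes (only feasible for tiny $n$):

```scala
object SwapCircuit {
  type Gate = (Int, Int, Double) // swap positions i and j with probability p

  // Stage i inserts element i uniformly among the first i positions:
  // gate (j, i) fires with probability 1/(i - j + 1). Total: n(n-1)/2 gates.
  def circuit(n: Int): List[Gate] =
    (for { i <- 2 to n; j <- 1 until i } yield (j - 1, i - 1, 1.0 / (i - j + 1))).toList

  // Exact output distribution: branch on every gate firing or not.
  def distribution(n: Int): Map[List[Int], Double] = {
    def go(gates: List[Gate], perm: List[Int], p: Double): List[(List[Int], Double)] =
      gates match {
        case Nil => List(perm -> p)
        case (i, j, q) :: rest =>
          val swapped = perm.updated(i, perm(j)).updated(j, perm(i))
          go(rest, swapped, p * q) ++ go(rest, perm, p * (1 - q))
      }
    go(circuit(n), (0 until n).toList, 1.0)
      .groupBy(_._1).map { case (k, v) => k -> v.map(_._2).sum }
  }
}
```

Correctness follows because each stage applies a mixture of deterministic permutations to an already-uniform arrangement, and the marginal of the inserted element telescopes to $1/i$ for every target position.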

by Joe Fitzsimons at January 30, 2015 05:59 PM


installing JDK8 on Windows XP - advapi32.dll error

I downloaded JDK8 build b121, and while trying to install it I get the following error:

the procedure entry point RegDeleteKeyExA could not be located in the dynamic link library ADVAPI32.dll

The operating system is Windows XP, Version 2002 Service Pack 3, 32-bit.

by yashhy at January 30, 2015 05:47 PM



clojure laziness: preventing unneeded mapcat results from being realized

Consider a query function q that returns, with a delay, some (let's say ten) results.

Delay function:

(defn dlay [x]
  (Thread/sleep 1500)
  x)

Query function:

(defn q [pg]
  (let [a [0 1 2 3 4 5 6 7 8 9]]
    (println "q")
    (map #(+ (* pg 10) %) (dlay a))))

Wanted behaviour: I would like to produce an infinite lazy sequence such that, when I take a value, only the needed computations are evaluated.

Wrong but explicative example:

(drop 29 (take 30 (mapcat q (range))))

If I'm not wrong, it needs to evaluate every sequence because it really doesn't know how long the sequences will be.

How would you obtain the correct behaviour?

My attempt to correct this behaviour:

(defn getq [coll n]
  (nth (nth coll (quot n 10))
       (mod n 10)))

(defn results-seq []
  (let [a (map q (range))]
    (map (partial getq a)
         (iterate inc 0)))) ; using iterate instead of range, this way i don't have a chunked sequence


(drop 43 (take 44 (results-seq)))

still realizes the "unneeded" q sequences.

Now, I verified that a is lazy, and iterate and map should produce lazy sequences, so the problem must be with getq. But I can't understand how it breaks my laziness... perhaps nth realizes things while walking through a sequence? If that is true, is there a viable alternative in this case, or does my solution suffer from bad design?

by akela47 at January 30, 2015 05:39 PM




Value in Auxiliary Constructor

I understand that values/vars cannot be created in an auxiliary constructor. So how does one utilize apply or some other technique to allow the following code to work?

Also, it's somewhat of a requirement not to hack around it by moving the value creation inside the this() call, though of course I realize that is a possibility.

class DistanceCalculator(context: GeoApiContext) {

    def this() {
        // Does not compile: a val cannot precede the this(...) call.
        val context = new GeoApiContext()
          .setConnectTimeout(1, TimeUnit.SECONDS)
          .setReadTimeout(1, TimeUnit.SECONDS)
          .setWriteTimeout(1, TimeUnit.SECONDS)
        this(context)
    }
}

Gabor informed me values can come after calling this() but I am uncertain the following would be the right way.

class DistanceCalculator(var context: GeoApiContext) {

    def this() {
        this(null)
        this.context = new GeoApiContext()
          .setConnectTimeout(1, TimeUnit.SECONDS)
          .setReadTimeout(1, TimeUnit.SECONDS)
          .setWriteTimeout(1, TimeUnit.SECONDS)
    }
}

by BAR at January 30, 2015 05:31 PM



Are regular grammars always LR(1)

The question is fairly straightforward. I just found a question on the internet that asks whether all regular grammars are

  1. LL(1)

  2. LR(1)

I guess they can't all be LL(1) because of left recursion, but how do we prove that they are LR(1), if indeed they are?

by anirudh at January 30, 2015 05:12 PM


Microsoft has released a new Outlook app. You can ...

Microsoft has released a new Outlook app. You can also use third-party email accounts with it. But the app then doesn't access them directly; instead, it pushes the account credentials into the US cloud.

January 30, 2015 05:01 PM

The new Greek government is terminating its cooperation ...

The new Greek government is terminating its cooperation with the Troika. That is a very sound step.

If someone owes you 10 euros, that's their problem. If someone owes you 100 billion euros, that's your problem. As I see it, the Greek government has no reason at all to let anyone tell it anything. Now is the time to let the loans default one after the other, and to sell that to the EU as a peace offering: letting them default one at a time rather than all at once. There's hardly anything Merkel can do beyond scowling. Try taking from a naked man's pockets!

January 30, 2015 05:01 PM


a question: swaption

I have a little question: you buy a swaption, starting in 6 months, in which you are the payer of the fixed rate. Features: 1.10 vs EUR 6-month, premium 0.10. At expiry of the swaption, the market conditions for a similar swap are 1.05 vs EUR 6-month. What will you do, and why?


by celia at January 30, 2015 04:58 PM


Struggling to fix my dual graphs (planar, directed) implementation.

I'm new to dual graphs in algorithms and am currently trying to implement a planar graph data structure that will compute all of the faces on the graph.

The way it works now:

The graph is represented as a map that links each vertex with a linked list of edges sorted by their cyclic ordering. Because faces are independent of the direction of an edge, each list will also contain the edges going into the vertex, but with a weight of infinity.

When each node in the linked list is constructed, it takes an edge and also creates an inverse node and stores the reference. The inverse node (which contains the inverse edge, e.g. (1 -> 2 with weight 5) becomes (2 -> 1 with weight inf)) is needed to find the node's own location in the adjacency linked list of its destination vertex.

For example:

Vertex 1 points to 2, 3, 5 in that order.

Vertex 2 points to 6, 1, 3 in that order.

When I follow the edge (1,2), I also need the inverse node of (1,2) so that I can get the edge direction before (1,2) in vertex 2's adjacency list (vertex 6 in this case).

Anyway, that's how the structure works at the moment. Right now, I'm struggling to implement the face finding algorithm correctly.

Here is a picture of the graph that I'm using for testing. Included is also the adjacency list. Here is the code for my current algorithm.

def makeFaces(self):
    def follow(node, used, dest):
        edge = node.edge
        if edge.tail == dest:
            return []
        if edge not in used:
            used[edge] = False
        elif used[edge] is False:
            used[edge] = True
        else:
            return []
        next = (~node).prev
        return [next.edge.head] + follow(next, used, dest)

    used = {}
    faces = []
    for vertex in self.adj:
        for node in self.adj[vertex]:
            edge = node.edge
            if len(faces) == self.faceCount - 1:
                return faces
            if edge not in used or used[edge] is False:
                faces.append([edge.head] + follow(node, used, vertex))

I don't think this is very common knowledge, so note that the ~ operator in Python is used to get the inverse of an object. I previously explained above what the inverse is in this case. Also, faceCount is the number of faces as per Euler's Formula (f = e + 2 - v); I subtract one because of the outside face.

I use a map of edges to determine how many times an edge has been found (not in the map for never found, false for found once, true for found twice). I've concluded that an edge can only be used twice, because an edge can be part of, at most, two faces.

The problem: My algorithm kind of works and I think the idea is right. However, if you'll look at my drawing above, it correctly finds the first face (in black), but then it tries to find a face using edge (1 -> 8). Because of this, it follows the graph all the way around the perimeter until it gets to vertex 1 again. I need a way to determine whether an edge should only be used once, or whether it should be used twice (i.e. whether an edge is part of one face or two).

In this case, my algorithm outputs:

[[(0, 1), (3, 0), (4, 4), (2, 3)], [(0, 1), (2, 3), (4, 4), (6, 2), (8, 4), (8, 2), (8, 0), (6, 2), (3, 0)], [(6, 2), (3, 0), (0, 1), (2, 3)], [(6, 2), (8, 0), (8, 2)]] 

Thank you for taking the time to read my post and consider the solution. I can post more code (of structures and such) if needed.

submitted by PastyPilgrim
[link] [1 comment]

January 30, 2015 04:57 PM

High Scalability

Stuff The Internet Says On Scalability For January 30th, 2015

Hey, it's HighScalability time:

It's a strange world...exotic, gigantic molecules Fit Inside Each Other like Russian nesting dolls
  • 1.39 billion: Facebook Monthly Active Users; $18 billion profit: Apple in 3 months; 200 million: Kik users; 11.2 billion: age of the oldest known solar system; 3 billion: videos viewed per day on Facebook
  • Quotable Quotes:
    • @kevinroose: This dude wins SF bingo. RT @caro: An Uber driver is Airbnb'ing the trunk of his Tesla for $85/night.
    • @BenedictEvans: Only 16% of Facebook DAUs aren't using it on mobile
    • @rezendi: Yo's Law: "in the 21st century tech industry, satire and reality are not merely indistinguishable but actually interchangeable."
    • Brent Ozar: I recommend that people back up data, not servers.
    • @AnnaPawlicka: "Shared State is the Root of All Evil"
    • Peter Lawrey: micro-day - about 1/12 of a second. micro-century - 51.3 minutes. femto-parsec - about 30 metres.
    • TapirLiu: OH: docker is like a condom to protect your computer from Node.
    • @DigitCurator: "The Next Decade In Storage": Resistive RAM promises better scaling, efficiency, and 1000x endurance of flash memory 
    • @BenedictEvans: At the end of 2014 Apple had ~650-675m live iOS devices. With zero unit sales growth, 700-720m by end 2015. Consumer PCs in use - 7-800m
    • @MailChimp: We sent 14.1 billion emails in December, including 741 million on Cyber Monday.
    • @mjpt777:  That's in the past. We can now do 20 million per second :-) per stream.
    • @bradwilson: Conclusions: 1. Ethernet over power does not perform as well as WiFi (??) 2. Ethernet over power hates being shared among multiple PCs
    • @mjpt777: Specialized Evolution of the General-Purpose CPU  - note that performance per watt is approx doubling per generation. 
    • @nighitingale: "The Earth is 4.6 billion years old. Scaling to 46 years, humans have been here 4 hours, the industrial..."
    • Joseph Campbell: The hero’s journey always begins with the call. One way or another, a guide must come to say, “Look, you’re in Sleepy Land. Wake. Come on a trip."
    • Frank Herbert: the most persistent principles of the universe were accident and error.

  • Will Facebook ever figure out this mobile thing? Not long ago that was the big question. We have an answer. In the fourth quarter, the percentage of its advertising revenue from mobile devices increased to 69%, up from 66% in the third quarter and 53% a year earlier. Mobile daily active users were 745 million on average for December 2014, an increase of 34 percent year-over-year.

  • The power of smart: Facebook’s Powerful Ad Tools Grew Its Revenue 25X Faster Than User Count. Facebook might be running out of people, but they aren't running out of ways of monetizing those people. Math grows faster than users.

  • The Cathedral of Computation by Ian Bogost. Agree in part. There does seem to be an uncritical acceptance of algorithms, as if because they enliven machines they are some how pure and objective, when the opposite is the case. Algorithms are made for human purposes by teams of humans and show the biases and hubris of their makers. And like all creatures, algorithms should be subject to skepticism, law, and review.

  • We have many long running debates in tech. Server side vs client side rendering is just one of them. A thoughtful analysis: Tradeoffs in server side and client side rendering by Malte Ubl.  Bret Slatkin boldly claims: Experimentally verified: "Why client-side templating is wrong". He concludes: I hope never to render anything server-side ever again. I feel more comfortable in making that choice than ever thanks to all this data. I see rare occasions when server-side rendering could make sense for performance, but I don't expect to encounter many of those situations in the future.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

by Todd Hoff at January 30, 2015 04:56 PM



How can I handle a > 22 column table with Slick using nested tuples or HLists?

I'm new to Scala (using 2.10) and Slick (using 2.0-M2). I see that one of the ways to get around the 22 column limit for tables in Slick is to use nested tuples. I can't figure out how to do that, despite finding this partial code on GitHub.

The current dev branch of Scala (2.11-M5) supports case classes with more than 22 elements, but not tuples with arity > 22. And Slick is not yet distributed for Scala 2.11 pre-releases. How can I define a 33-column table (and have it work with all of Slick's syntactic sugar)?

N.B., I'm trying to support an existing schema and can't change the table normalization.

by sventechie at January 30, 2015 04:52 PM

Printing Future Date in Scala

I'm trying to get the future date in scala. Here is my code

 val today = java.util.Calendar.getInstance().getTime()

 val future = Calendar.getInstance()
  future.add(Calendar.DAY_OF_YEAR, 7)

  val futureDate = future.getTime()

  val yearFormat = new SimpleDateFormat("YYYY")
  val monthFormat = new SimpleDateFormat("MM")
  val dateFormat = new SimpleDateFormat("DD")

  val currentYear = yearFormat.format(today)  
  val currentMonth = monthFormat.format(today)
  val currentDate = dateFormat.format(today)

  val futureYear = yearFormat.format(futureDate)  
  val futureMonth = monthFormat.format(futureDate)
  val futureDay = dateFormat.format(futureDate)

  println("Current Year :"+currentYear)
  println("Current Month :"+currentMonth)
  println("Current Date :"+currentDate)
  println("Future Year :"+futureYear)
  println("Future Month :"+futureMonth)
  println("Future Date :"+futureDay)

The code is simple. I want to add 7 days to today's date and print the date. When I run this, it prints the future date incorrectly:

Current Year :2015
Current Month :01
Current Date :30
Future Year :2015
Future Month :02
Future Date :37

Please correct me; what am I missing? I'm new to Scala.
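For reference (this snippet is not part of the original question), SimpleDateFormat pattern letters are case-sensitive: DD is day-of-year and YYYY is week-year, while dd and yyyy are day-of-month and calendar year, which would explain output such as 37 above:

```scala
import java.text.SimpleDateFormat
import java.util.{Calendar, GregorianCalendar}

// Pattern letters are case-sensitive:
//   dd = day of month,    DD = day of year
//   yyyy = calendar year, YYYY = week year
val feb6 = new GregorianCalendar(2015, Calendar.FEBRUARY, 6).getTime

println(new SimpleDateFormat("yyyy-MM-dd").format(feb6)) // prints 2015-02-06
println(new SimpleDateFormat("DD").format(feb6))         // prints 37 (37th day of the year)
```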

by user3015109 at January 30, 2015 04:42 PM

Calling a web service that depends on a result from another web service using Play! 2.2 with scala

I want to call a webservice and use the response that is returned to call an other webservice.

I already looked at this post (and a lot of others): the same question as mine

What I have so far (it compiles without warnings):

def returnIdsOfPlace() = {
  for {
    respA <- WS.url("").get()
    ids: List[String] = returnListOfIdsFromResponse(respA)
    respB <- Future.sequence( =>
      WS.url("" + id + "&limit=200&type=page&access_token=ajshlajshljh").get()))
  } yield {
    println(respA.json) => println(response.json))
  }
}

I have no compilation error and it prints nothing in the console, not even the respA.json although if I remove the respB part, it correctly prints the response respA...

I have read the doc a lot of times and have been searching on SO a long time as well but without results...

What am I missing? Thanks ;)

by user3683807 at January 30, 2015 04:41 PM

Ansible: How to differentiate the hosts between two host-groups

My hosts file is like



And I want to run two different .yml files separately for each of these hosts using - include app.yml and - include db.yml from a main.yml file

To differentiate between the hosts I used when: "{{ groups['app'] }}" and when: "{{ groups['db'] }}", but it's not working properly. I'm also not sure whether it's the right approach or not.

# cat main.yml 
 - include: app.yml
   when: "{{ groups['app'] }}"
 - include: db.yml 
   when: "{{ groups['db'] }}"

by ras at January 30, 2015 04:41 PM



Reference for Nuclear Norm Relaxations

I have seen a bunch of results concerning matrix completion, PCA, and compressed sensing where a common theme has been to relax the rank constraint/objective by replacing it with the nuclear norm. I was wondering if there is a survey of some sort which collects these results, compares them, and presents the basic underlying technique. I haven't read the original papers yet, so they might be the best reference, but the purpose of the question is to know whether there are other, easier-to-understand references to get started on this topic.

Thanks in advance

by NAg at January 30, 2015 04:36 PM


assign tag and public for instance via amazonica

I'm using amazonica to create an ami and then launch an instance from the ami when it's ready.

The problem I'm having with amazonica is that it has about zero documentation (that I can find), apart from what's in the readme. And what's in the readme is very little and covers very little.

I can currently successfully look at running instances, grab latest / required instance, create an AMI off of it, wait until that's ready, and then launch that instance.

Only, I don't know what arguments the (run-instances) method takes. Looking at the Java API doc I have figured out most of the parameters with some trial and error, but I still need to set a few more things.

Where can I find what parameters to pass to this function?

Currently, I have:

(run-instances :image-id ami-id
             :min-count 1
             :max-count 1
             :instance-type "t2.small"
             :key-name "api-key-pair"
             :sercurity-groups ["sg-1a2b3c4d"]
             ;:vpc-id "vpc-a1b2c3d4"
             :subnet-id "subnet-a1b2c3d4"
             :monitoring true
             :ebs-optimized false
             :tag [{:value instance-name 
                    :key "Name"}])

And this sets most things. But I can't figure out how to set:

  • tag - I want to set a tag name: "prod-1.0"
  • security groups. I've tried the one above, and this:

     :security-groups [{:group-id "sg-1a2b3c4d" 
                 :group-name "SG_STRICT"}]

but no use. Either the instance gets the default group, or I get strange errors like

...AmazonServiceException: The specified instance type can only be used in a VPC. A subnet ID or network interface ID is required to carry out the request


....s.AmazonServiceException: The security group '{:group-id "sg-1a2b3c4d", :group-name "SG_STRICT"}' does not exist

I've gone through that whole doc page a couple of times and can't find any other sensible options / keywords to pass.

I also want to start the instance with auto-assign-public-ip option too.

The amazonica source doesn't reveal much, unfortunately, as the doc says it uses reflection heavily, and the tests aren't very elaborate.

So how do I set a security group and tags for this, please?

by LocustHorde at January 30, 2015 04:35 PM


I'm obsessed with figuring out how this is made. (x-post from r/gamedev)

Someone suggested I post this here. Looks like a great place to get some insight.

I've become obsessed with figuring out how this is achieved: [1] It has similarities to reaction diffusion; the pattern emerges and continues to improve (it's subtle). The colors bleed into each other with the same behavior. But unlike reaction diffusion, it's geometric and rigid. Reaction diffusion algorithms always end up with an organic shape. Someone in the other thread also pointed out one of his pieces is tagged with 'game of life.' So I'll be exploring cellular automata.

This is the one that originally sparked my interest, but I'm focusing on his previous works because this one is overwhelming. It does indicate to me that his previous works utilize interesting algorithms, as opposed to hand animation or image manipulation: [2]

Any insight would be much appreciated! Thanks.

Here's some more:

submitted by dannyREDDIT
[link] [4 comments]

January 30, 2015 04:33 PM

Planet Theory

Some pictures in geometric probability

As I discussed in a previous blog post, I have been recently interested in models of randomly growing networks. As a starting point I focused my attention on the preferential attachment rule and its variants, in part because of its ubiquity in the literature. However these models miss one key feature, namely the notion of geometry: when a newcomer arrives in the network, she can connect to any vertex, regardless of a possible spatial organization of the network and her current position in that space.

In this blog post I want to discuss some of the most famous (or infamous in some cases) spatial models of network growth. For simplicity, and also to draw pictures, everything will be described in the plane, and specifically on the lattice \mathbb{Z}^2. These models iteratively build an aggregate A_n \subset \mathbb{Z}^2 as follows: First A_0 = \{(0,0)\}. Then A_{n+1}=A_n \cup \{X_n\} where X_n is a random point in the boundary of the aggregate, that is X_n \in \partial A_n := \{x \in \mathbb{Z}^2 : \min_{y \in A_n} \|x-y\|_2 =1\}. Thus a model is defined by the probability measure one puts on the boundary of the aggregate to select the next point to add.

All the pictures are with half a million particles, and the particles are colored as a function of their age, with blue corresponding to old particles, and red corresponding to young ones.

Eden model

The simplest model uses the uniform measure on the boundary, and it is known as the Eden model. Here is a picture:


One of the most basic results about this model is that it admits a limit shape:

Theorem: There exists a (deterministic) convex set B \subset \mathbb{R}^2 such that for any \epsilon >0,

    \[\lim_{n \to +\infty} \mathbb{P} \left( (1- \epsilon) B \subset \frac{1}{\sqrt{n}} A_n \subset (1+\epsilon) B \right) = 1.\]

It is known that B is not an Euclidean ball (this should be clear from the picture), though nobody knows what B is exactly. How do you prove such a theorem? Well it turns out to be pretty easy once you have the right machinery. The first step is to realize that the Eden model exactly corresponds to first passage percolation with exponential clocks: imagine that you have i.i.d. exponential random variables t(e) on the edges of \mathbb{Z}^2, and consider A(t) \subset \mathbb{Z}^2 to be the set of points x such that there exists a path e_1, \hdots e_r from (0,0) to x with \sum_{s=1}^r t(e_s) \leq t. In other words A(t) is the set of points reached at time t by a fluid released at time 0 in (0,0) and with travel times on the edges given by the random variables t(e). It is an easy exercise to convince yourself that A_n has the same distribution as A(t) conditionally on |A(t)| = n (this is actually not true, one in fact needs to put the travel times on the vertices of \mathbb{Z}^2 rather than on the edges, but let me get away with that small mistake). At this point the continuous version of the above theorem can be rather easily proved via Kingman’s subadditive ergodic theorem, see the Saint Flour lecture notes by Kesten for more details.
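The growth rule is straightforward to simulate. Here is a minimal sketch (my own, not from the post) that grows an Eden aggregate by repeatedly adding a uniformly chosen boundary point:

```scala
import scala.util.Random

object EdenModel {
  type Pt = (Int, Int)

  def neighbors(p: Pt): Seq[Pt] =
    Seq((p._1 + 1, p._2), (p._1 - 1, p._2), (p._1, p._2 + 1), (p._1, p._2 - 1))

  /** Grow an aggregate A_n by adding `steps` particles, starting from {(0,0)}. */
  def grow(steps: Int, rng: Random = new Random(0)): Set[Pt] = {
    var aggregate: Set[Pt] = Set((0, 0))
    var boundary: Set[Pt]  = neighbors((0, 0)).toSet
    for (_ <- 1 to steps) {
      // X_n is uniform on the boundary of A_n
      val x = boundary.toVector(rng.nextInt(boundary.size))
      aggregate += x
      // x leaves the boundary; its neighbors outside the aggregate join it
      boundary = (boundary - x) ++ neighbors(x).filterNot(aggregate)
    }
    aggregate
  }
}
```

Plotting the particles colored by insertion time reproduces pictures like the one above; rescaling the aggregate by 1/\sqrt{n} illustrates the limit-shape theorem empirically.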


Diffusion Limited Aggregation (DLA)

Here we consider the harmonic measure from infinity: pick a point of \mathbb{Z}^2 uniformly at random from a large circle that contains A_n, and then start a random walk from this point until it hits \partial A_n (which will happen eventually since a simple random walk in two dimensions is recurrent); let X_n be the latter point. Here is a global picture and a zoom (the same pictures in black and white are provided at the end of this section):


Absolutely nothing is known about this model apart from the following simple result of Kesten:

    \[\limsup_{n \to +\infty} \frac{\max_{x \in A_n} \|x\|_2}{n^{2/3}} < + \infty .\]

A fascinating open problem is to show that for some \epsilon>0,

    \[\liminf_{n \to +\infty} \frac{\max_{x \in A_n} \|x\|_2}{n^{0.5+\epsilon}} = + \infty ,\]

which the above picture clearly seems to validate. Note that at the moment nobody can even prove that once rescaled by 1/\sqrt{n} the DLA will not converge to an Euclidean ball…
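For completeness, here is a rough simulation sketch (mine, not the author's) with the usual practical truncations: walkers are launched from a finite circle just outside the aggregate and relaunched when they stray too far, rather than genuinely started at infinity:

```scala
import scala.util.Random

object DLAModel {
  type Pt = (Int, Int)
  private val dirs = Vector((1, 0), (-1, 0), (0, 1), (0, -1))

  private def norm(p: Pt): Double = math.hypot(p._1, p._2)

  private def onCircle(r: Double, rng: Random): Pt = {
    val t = rng.nextDouble() * 2 * math.Pi
    ((r * math.cos(t)).round.toInt, (r * math.sin(t)).round.toInt)
  }

  def grow(particles: Int, rng: Random = new Random(0)): Set[Pt] = {
    var agg: Set[Pt] = Set((0, 0))
    var radius = 0.0
    for (_ <- 1 to particles) {
      val launch = radius + 2
      var x = onCircle(launch, rng)
      var stuck = false
      while (!stuck) {
        val d = dirs(rng.nextInt(4))
        val next = (x._1 + d._1, x._2 + d._2)
        if (agg(next)) stuck = true                                 // about to step into the aggregate: freeze here
        else if (norm(next) > 3 * launch) x = onCircle(launch, rng) // strayed too far: relaunch
        else x = next
      }
      agg += x
      radius = math.max(radius, norm(x))
    }
    agg
  }
}
```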




Ballistic DLA

Here is a model that, as far as I know, was introduced by Ronen Eldan. Instead of having one particle coming from infinity to hit the aggregate through a random walk, imagine now that the aggregate is constantly bombarded (ballistically). Precisely, particles come from infinity in every direction (that is, on every line) at a constant rate. In other words first select a direction uniformly at random, and then among the lines parallel to that direction that hit the current aggregate choose one uniformly at random. The added point is the boundary point at the intersection of this (oriented) line with the aggregate. Here is a picture where we can see some resemblance with the Eden model:


and here is a picture from further away where we can see local resemblance with DLA:


see also this zoom in black and white:


Needless to say, nothing is known about this model.


Internal Diffusion Limited Aggregation (IDLA)

Finally the IDLA model is yet another modification of DLA, where instead of starting the random walk from infinity one starts it from the origin. Perhaps surprisingly, and on the contrary to everything discussed so far, we know quite a lot about this model! Its limit shape is an actual honest Euclidean ball, see this paper by Lawler, Bramson and Griffeath. In fact we even know that the average fluctuation from the aggregate around its limit is of constant order, see this paper by Jerison, Levine and Sheffield (information about the distribution of these fluctuations is also provided!).


Writing this blog post was quite a bit of fun, and I thank Ronen Eldan, Laura Florescu, Shirshendu Ganguly, and Yuval Peres from whom I learned everything discussed here. To conclude the post here are some intriguing pictures from variants of the above model that I cooked up (unfortunately I’m not sure the models are very interesting though):



by Sebastien Bubeck at January 30, 2015 04:31 PM


Relaxation of the null production restriction in Regular and Context Free Grammars

I am convinced of the fact that allowing productions of the form $S \rightarrow \epsilon$ in a context-sensitive grammar would allow RE languages to be expressed if $S$ were on the right-hand side of some production.

However, in most definitions of regular and context-free grammars (such as those on Wikipedia), this restriction is nowhere to be seen (as an exception, the restriction is mentioned in the article on the Chomsky hierarchy for Type-3 languages).

  1. Is this relaxation intentional in that both variants (with and without the restriction) have the same expressive power, or is this just a "sloppy" definition?

  2. Even if they do have the same expressive power, wouldn't that ensure that Context Free Grammars are not a subset of Context Sensitive Grammars, thus violating Chomsky's Hierarchy?

by peteykun at January 30, 2015 04:30 PM

Why is Radix Sort $O(n)$?

In radix sort we first sort by the least significant digit, then we sort by the second least significant digit, and so on, and end up with a sorted list.

Now if we have a list of $n$ numbers, we need $\log n$ bits to distinguish between those numbers. So the number of radix sort passes will be $\log n$. Each pass takes $O(n)$ time, and hence the running time of radix sort is $O(n \log n)$.

But it is well known that it is a linear-time algorithm. Why?
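For reference, the standard resolution of this puzzle is that a pass need not consume a single bit: grouping bits into base-$b$ digits, each pass is a counting sort over $n$ keys and $b$ buckets, so for $w$-bit keys

```latex
T(n) \;=\; O\!\left(\frac{w}{\log_2 b}\,(n + b)\right)
\quad\xrightarrow{\;b \,=\, n\;}\quad
O\!\left(\frac{w}{\log_2 n}\, n\right).
```

Under the question's own assumption $w = \Theta(\log n)$ this is $O(n)$; in general radix sort is linear only when the key width is $O(\log n)$ bits.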

by Pratik Deoghare at January 30, 2015 04:17 PM

Algorithm to find shortest path between two nodes

I want an algorithm similar to Dijkstra or Bellman-Ford for finding the shortest path between two nodes in a directed graph, but with an additional constraint.

The additional constraint is that there are $N$ sets of special edges with weight $0$ such that a path is not considered valid if it traverses one edge in a set but not the remaining edges in that set.

Note that these $N$ sets of edges are disjoint such that no edge belongs to more than one of the $N$ sets. The number of edges in each set is around 3 to 6. And there are a large number of these sets. About $50\%$ of edges belong to a set, the rest don't belong to a set and can be traversed normally.

Does such an algorithm exist? Can such an algorithm exist that is better than brute force?

by dan at January 30, 2015 04:12 PM


Does "Productive function" mean just that in ME O'Neill, The Genuine Sieve of Eratosthenes?

M.E. O'Neill in the Epilogue of "The Genuine Sieve of Eratosthenes" (preprint DOI 10.1017/S0956796808007004) quotes Richard Bird that "union is defined in the way it is in order to be a productive function."

Is that the technical term referring to productive sets and creative sets, or is it just a manner of speaking?

The function union is defined there in the following Haskell code:

primes = 2:([3..] `minus` composites)
  where
    composites = union [multiples p | p <- primes]

multiples n = map (n*) [n..]

(x:xs) `minus` (y:ys) | x< y = x:(xs `minus` (y:ys))
                      | x==y = xs `minus` ys
                      | x> y = (x:xs) `minus` ys

union = foldr merge []
  where
    merge (x:xs) ys = x:merge' xs ys

    merge' (x:xs) (y:ys) | x< y = x:merge' xs (y:ys)
                         | x==y = x:merge' xs ys
                         | x> y = y:merge' (x:xs) ys

I haven't learned about productive sets. If it's helpful for understanding this point, I will.

by minopret at January 30, 2015 04:12 PM


Why is a superscalar processor SIMD?

  1. From

    In Flynn's taxonomy, a single-core superscalar processor is classified as an SIMD processor (Single Instructions, Multiple Data),

    Flynn's taxonomy is based on number of instruction streams and number of data streams.

    A superscalar processor can run more than one instruction at a time, so why isn't it MIMD?

    If there is a valid reason for a superscalar processor to be SI, then why isn't there a similar reason which makes it single-data (i.e. SISD)?

  2. An instruction pipeline also runs more than one instruction at a time. Is it SISD or SIMD?
  3. Does "SI $\equiv$ only one CPU core"? Or does only one imply the other?

    Does "MI $\equiv$ more than one CPU core"? Or does only one imply the other?


by Tim at January 30, 2015 04:11 PM

Limiting capacity of knapsack to a polynomial function of elements in the Knapsack problem

I saw somewhere that if we limit the capacity (weight) of the knapsack to a polynomial function of the number of elements, then the problem falls into P, but it didn't say why. I can't figure out why it is true...

I think I found a solution but I'm not sure it is completely right. The time complexity of the standard dynamic-programming solution to the knapsack problem is $O(W \cdot n)$, where $W$ is the capacity of the knapsack, so if we limit $W$ to a polynomial function $P(n)$ of $n$, the complexity becomes $O(P(n) \cdot n)$, which is polynomial...

by Hamed Hemati at January 30, 2015 04:11 PM

How to extract patterns of inputs?

I need to extract patterns. For example:

<input type="text" name="ex1">
<input maxlength="10" name="ex2">

Extracted file:

ex1: type=text
ex2: maxlength=10

How can I do it? What methods can I use?

by Paulo Costa at January 30, 2015 04:08 PM


ZeroMQ (0MQ), how to connect the client to a remote server?

I've implemented a simple ZeroMQ (0MQ) server and client. They work well when I run both on one (local) machine. But when I run the client on another machine, it doesn't work (it cannot connect to the remote server). I've checked my firewall and it's inactive (Ubuntu 14.04). My server code, written in Java, is:

ZMQ.Socket responder = context.socket(ZMQ.REP);

and the client code:


in which "ipaddress" is the IP address of my server. I've also tried different port numbers.

Please explain what the problem is and what you suggest to solve it.

Thanks in advance

by Abr Rjb at January 30, 2015 04:04 PM



AWS Expansion – WorkSpaces, Directory Service, ElastiCache, GovCloud, Kinesis, Traditional Chinese, More

We’ve increased the geographic footprint and international relevance of several AWS services this past month. Here’s a summary:

For more information on service availability, please take a look at our Products and Services by Region page.



by Jeff Barr at January 30, 2015 03:54 PM



Is the complexity of this path problem known?

Instance: An undirected graph $G$ with two distinguished vertices $s\neq t$, and an integer $k\geq 0$.

Question: Does there exist an $s-t$ path in $G$, such that the path intersects at most $k$ triangles? (For this problem a path is said to intersect a triangle if the path contains at least one edge from the triangle.)

by Andras Farago at January 30, 2015 03:31 PM


What benefits would taking the AI cluster in my undergraduate CA degree give me?

Hi. I'm taking my first AI class this quarter and it's awesome! It's all probability: we've learned to apply Bayes' theorem, the product rule, marginalization, and recently maximum likelihood.

My university only offers a 3 course cluster for AI in undergraduate and I don't think I'll be pursuing a master's degree for some time. Plus, I am on the quarter system, so a 3 course cluster here is equivalent to a 2 course cluster at a semester school. Would finishing this cluster help me pursue software engineering in any way?

submitted by like-a-bbas
[link] [5 comments]

January 30, 2015 03:29 PM


Why do some architectures use a CMP instruction before branching while others just branch?

My initial guess is that you will have more instruction space for the immediate in your branch instruction when you first use a CMP instruction. However, you have to use 2 instructions each time you want to branch: a CMP and then the branch instruction.

So are there any advantages to using CMP?

by model world at January 30, 2015 03:18 PM

How Big data and approximation algorithms are related? [on hold]

I want to pursue my studies in big data and I am very new to this field. I am in doubt about whether to take a course in approximation algorithms or not. Are these two related to each other?

by Hadi Amiri at January 30, 2015 03:13 PM



Scala Builder Pattern: illegal cyclic reference involving type T

I'm trying to write some generic builders for my User class hierarchy. I have a trait, UserBuilder, and each "with" method in the trait has to return the same type as the current class. So if I'm inside the ComplexUserBuilder, the withId method should return a ComplexUserBuilder and not a UserBuilder.

But I'm getting

illegal cyclic reference involving type T

Is there a way to workaround this?

Here is my code:

trait UserBuilder[T >: UserBuilder[T]] {

  var id: String = ""

  def withId(id: String): T = { = id
    return this
  }
}

class ComplexUserBuilder extends UserBuilder[ComplexUserBuilder] {

  var username: String = ""

  def withUsername(username: String): ComplexUserBuilder = {
    this.username = username
    return this
  }

  def build = new ComplexUser(id, username)
}

By the way, if I replace trait UserBuilder[T >: UserBuilder[T]] with trait UserBuilder[T >: UserBuilder[_]] I get:

type arguments [model.ComplexUserBuilder] do not conform to trait UserBuilder's type parameter bounds [T >: model.UserBuilder[_]]


trait UserBuilder[T >: UserBuilder[T]]

should be (as GClaramunt suggested)

trait UserBuilder[T <: UserBuilder[T]]

but now there is an ugly cast as the return type
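For what it's worth, a common Scala alternative (a sketch, assuming a simple ComplexUser class) sidesteps both the F-bounded parameter and the cast by declaring the setters with return type this.type, the singleton type of the receiver:

```scala
class ComplexUser(val id: String, val username: String)

trait UserBuilder {
  var id: String = ""
  // this.type means "the exact type of this object", so on a
  // ComplexUserBuilder the call statically returns a ComplexUserBuilder
  def withId(id: String): this.type = { = id
    this
  }
}

class ComplexUserBuilder extends UserBuilder {
  var username: String = ""
  def withUsername(username: String): this.type = {
    this.username = username
    this
  }
  def build = new ComplexUser(id, username)
}

// chaining keeps the most specific builder type, no cast required:
val user = new ComplexUserBuilder().withId("42").withUsername("toto").build
```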

by Danix at January 30, 2015 03:12 PM

Ansible - YAML gotcha with an integer

I'm using Ansible to deploy a webapp. I'd like to wait for the application to be running by checking that a given page returns a JSON with a given key/value.

I want the task to be tried a few times before failing. I'm therefore using the combination of the until/retries/delay keywords.

The issue is, I want the number of retries to be taken from a variable. If I write:

  retries: {{apache_test_retries}}

I fall into the usual YAML gotcha.

If, instead, I write:

  retries: "{{apache_test_retries}}"

I'm told the value is not an integer.

ValueError: invalid literal for int() with base 10: '{{apache_test_retries}}'

Here is my full code:

- name: Wait for the application to be running
  register: res
  sudo: false
  when: updated.changed and apache_test_url is defined
  until: res.status == 200 and res['json'] is defined and res['json']['status'] == 'UP'
  retries: "{{apache_test_retries}}"
  delay: 1

Any idea on how to work around this issue? Thanks.

by Alexis Seigneurin at January 30, 2015 03:08 PM

Python like package name aliasing in Scala

I know that in Scala you can alias things inside a package like this: import some.package.{someObject => someAlias}

Is there a way of creating an alias for a package name, not for the classes/objects inside it?

For example in Python you can do: import package as alias

by myhau at January 30, 2015 03:06 PM


Don’t Share Code Between Microservices

To clarify, I don’t necessarily agree with the author wholesale, I just wanted to hear people’s thoughts on the topic.


by zacbrown at January 30, 2015 03:04 PM


add dependency to scala SBT project (import play.api.libs.json.Json)

I'm new to Scala and SBT.

I'm following this example: Play Framework

import play.api.libs.json.Json

val json: JsValue = Json.parse("""
{
  "user": {
    "name" : "toto",
    "age" : 25,
    "email" : "",
    "isAlive" : true,
    "friend" : {
      "name" : "tata",
      "age" : 20,
      "email" : ""
    }
  }
}
""")
How do you put the dependency for this library in the build.sbt file?

I'm using the IntelliJ Scala IDE, Community Edition.
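For reference, since Play 2.3 the JSON library has been published as a standalone artifact, so a single line in build.sbt pulls it in (the version number here is an assumption; match it to your Play version):

```scala
// build.sbt: play-json can be used outside a full Play application
libraryDependencies += "" %% "play-json" % "2.3.4"
```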


by scalauser at January 30, 2015 03:03 PM

Planet Clojure

We released the basis for our clojure microservices

Recently we released tesla-microservice to GitHub. It is software written in Clojure, and it is the basis of some of the microservices we are working on as part of the technical platform of We named our software after Nikola Tesla, an ingenious engineer and inventor of the late 19th and early 20th century.

tesla-microservice is based on the component library, an elegant and quite minimal framework to build stateful applications in clojure.

Currently tesla-microservice allows you to build a basic web application with some basic features:

  • Load config from classpath and/or filesystem.
  • Aggregate a status based on the status of the different components and their subcomponents.
  • Deliver status details as json.
  • Serve a simple healthcheck based on that status.
  • Report to graphite using the metrics library.
  • Manage routes using compojure.
  • Serve content with an embedded jetty.


Nikola Tesla. 1856-1943


In the first place we developed tesla-microservice for a prototypical project. It has since lost its prototypical status and will be a permanent part of the infrastructure. As other teams are starting to use it as the basis for their own microservices, we decided to publish the code. This owes to our shared-nothing guideline, which says that the different teams at should not share any code unless they do it publicly. That strategy protects us from hidden inter-team dependencies and business logic sneaking into library code.

We did not primarily design tesla-microservice to be a general-purpose framework, but rather tailored it to fit our specific needs at If you are looking for groundwork to build your own application on top of, you might want to have a look at, duct or system. These are also based on the component framework, which gained quite some popularity over the last year.

Having said that, we would be quite excited to learn that tesla-microservice proves to be useful in other scenarios, too. If you learned something from it or even use it, let us know. Do not hesitate to contact us  with questions or suggestions for improvements.  We would love to hear from you.

We will likely publish additional components in the future. Examples would be access to zookeeper, mongodb and redis. So stay tuned for more!

by Christian Stamm at January 30, 2015 02:56 PM


Understanding the definition of SPMD

From Wikipedia

SPMD (single program, multiple data) is a technique employed to achieve parallelism; it is a subcategory of MIMD. Tasks are split up and run simultaneously on multiple processors with different input in order to obtain results faster.

  1. Does a program running as SPMD run as multiple processes, or as one process with multiple threads? If the former, do the multiple processes run the same program?

  2. Wikipedia tries to compare SPMD with SIMD:

    In SPMD, multiple autonomous processors simultaneously execute the same program at independent points, rather than in the lockstep that SIMD imposes on different data. With SPMD, tasks can be executed on general purpose CPUs; SIMD requires vector processors to manipulate data streams. Note that the two are not mutually exclusive.

    What does "at independent points" mean? Does "point" mean time? If yes, doesn't "at independent points" contradict "simultaneously"?

  3. Wikipedia also compares SPMD with SMP:

    Unlike SPMD, shared memory multiprocessing, also called symmetric multiprocessing or SMP, presents the programmer with a common memory space and the possibility to parallelize execution by having the program take different paths on different processors. The program starts executing on one processor and the execution splits in a parallel region, which is started when parallel directives are encountered. In a parallel region, the processors execute a single program on different data. A typical example is the parallel DO loop, where different processors work on separate parts of the arrays involved in the loop. At the end of the loop, execution is synchronized, only one processor continues, and the others wait. The current standard interface for shared memory multiprocessing is OpenMP. It is usually implemented by lightweight processes, called threads.

    From Section 6.3 of Computer Organization and Design, Fifth Edition: The Hardware/Software interface by David A. Patterson, John L. Hennessy:

    programmers normally write a single program that runs on all processors of an MIMD computer, relying on conditional statements when different processors should execute different sections of code. This style is called Single Program Multiple Data (SPMD), but it is just the normal way to program a MIMD computer.

    I don't quite understand how SPMD and SMP are different as stated in Wikipedia; maybe some paraphrasing would help. In particular, Wikipedia says that in SMP a program can take different paths on different processors. In Patterson's book, in SPMD, a program can also run different sections of code on different processors.

    Wikipedia says "The current standard interface for shared memory multiprocessing is OpenMP". Is multithreading by OpenMP or PThread on multiple processors SPMD or SMP?


by Tim at January 30, 2015 02:54 PM



Create and use group without restart

I have a task that creates a group.

- name: add user to docker group
  user: name=USERNAME groups=docker append=yes
  sudo: true

In another playbook I need to run a command that relies on having the new group permission. Unfortunately this does not work, because the new group is only loaded after I log out and log in again.

I have tried some stuff like:



newgrp docker; newgrp

But nothing worked. Is there any way to force Ansible to reconnect to the host and do a relogin? A reboot would be the last option.

by Mark at January 30, 2015 02:38 PM


Which program performs better on input size 10000

Program A and program B both take an array as input. Their running times in seconds are listed below for different array sizes.

Array size    Time taken by $A$    Time taken by $B$
2             0.18                 $3.2 \times 10^{-7}$
100           5.0                  0.07
1000          50                   8.00
5000          125.8                20.00

Which one will perform better on an array of size 10000? Can someone help me?

by Kumar at January 30, 2015 02:31 PM




How to prevent using var in this scala code?


I have scala code using JSON4s:

val json = parse(jsonStr)

var result = ""

val resultJson = json \ "result"
if (resultJson != JNothing) {
    val resultStr = resultJson.extract[String].trim
    result = if (resultStr.length > 0) resultStr else ""
}
How to change the code to use val instead of var for the result variable?


Alright, here is the real deal:

def foo(list: List[Map[String, Any]], list2: List[String]) {

    var params = Map[String, Any]()

    val buffer = new ListBuffer[String]

    for ((a, i) <- list.zipWithIndex) {

      val ai = s"actor$i"

      buffer += s"""MATCH (a:ACTOR {id: "${a.getOrElse("id","")}"}) SET a += {$ai}"""

      params += (ai -> a)
    }

    if (list2.length > 0) {
      buffer += s"""
         MATCH (a:Actor)
         WHERE in ["${list2.mkString("""", """")}"]
         OPTIONAL MATCH (a:Actor)-[r]-()
         DELETE a, r
         """
    }

    val query = buffer.mkString("WITH 1 AS _")

    // perform operation with the "query" and "params" here
    // ...
}

You can see I'm using var at var params = Map[String, Any](). params is used for the query's named parameters, if you're curious.

The question is still the same: how can I change the code to not use var?
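For the params map specifically, one var-free sketch is to build it in a single expression with map/toMap over the indexed list (`buildParams` is an illustrative name, not the question's code):

```scala
// Build the named-parameter map in one expression instead of mutating a var.
def buildParams(list: List[Map[String, Any]]): Map[String, Any] = { case (a, i) => s"actor$i" -> a }.toMap

val params = buildParams(List(Map("id" -> "A"), Map("id" -> "B")))
// params maps "actor0" -> Map("id" -> "A") and "actor1" -> Map("id" -> "B")
```

The same shape can replace the ListBuffer too, by mapping each indexed element to its query fragment and concatenating the results.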

by suud at January 30, 2015 02:23 PM


Cheapness indicator for Convertibles Bonds

What indicator (or combination of those) could be used to roughly estimate the cheapness of a convertible bonds ?

Like the price/earning ratio for equities.

Thanks, Max.

by Maxime at January 30, 2015 02:22 PM


Dave Winer

Dynamic metadata in web pages

Here's the scenario..

  1. I have a web app that's used to display content dynamically. The page containing the app is static, stored in an S3 bucket.

  2. The app doesn't have any data in it, just code and a few DOM objects. It's a liveblog reader. Think of it as a container that can be used to display lots of different stuff.

  3. When it starts up, it loads the data from a file, displays it.

  4. The URL parameter contains the ID of an item within the file it's displaying. It moves the cursor there. Here's an example, a post about cheese.

  5. The user then posts the link on Facebook, the one from #4 above. The user thinks they're posting a link to their story, not to my app. They barely realize my app exists. It's all about what they've written.

  6. When Facebook displays the post, I want readers to see a description of the item that's being pointed to: what readers will see when they click the link.

  7. So what I need to do is change the og:url, og:title, og:description, and og:image elements in the <head> when the app starts up. And (this is what doesn't work) have Facebook recognize the changes.

  8. I want links to items to be passed around the net just like all links are. I could of course provide a programmatic way to post a link to Facebook, and that could have whatever info I want it to, but -- I love the magical scraping FB does. I just want it to get it right.


Here's a demo app that illustrates.

And a Facebook post about the feature.

And I'm accumulating notes here on my liveblog. This shit is useful!

January 30, 2015 02:20 PM


Find a quarrel-free seating order with a greedy algorithm

I'm revising for an Algorithms exam and looking at a sample question it says :

A group of n teenagers $t_1, \dots, t_n$ are to sit in a single row of n chairs watching a particularly boring comedy movie. Some teenagers quarrel with each other all the time. The problem is to devise a seating arrangement for the group in such a way that teenagers seated next to each other do not quarrel.

Propose a solution to this problem using the Greedy approach. Estimate the complexity of the resulting algorithm.

In lectures for greedy problems we've only covered Knapsack Problems so Next Fit/Best Fit for Bin Packing. I can't seem to understand how these methods have any relevance to coming up with a solution for the question.

Obviously I don't expect anyone to answer this, since I've not even made an attempt. But in honesty, I don't know where to start. If you could give me some sort of hint or just general advice, I'd appreciate it, because I'm pretty stranded at the minute.
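For what it's worth, here is one hedged greedy sketch (my own illustration — the names and the greedy criterion are assumptions, and plain greedy without backtracking can fail even when a valid arrangement exists): model quarrels as pairs and repeatedly seat a teenager compatible with the last seated one, preferring the most quarrelsome candidates first so the hardest people are placed early.

```scala
// Greedy sketch: seat teenagers 1..n one at a time. A candidate is anyone who
// does not quarrel with the previously seated teenager; among candidates we
// greedily pick the one with the most quarrels still outstanding.
def greedySeating(n: Int, quarrels: Set[(Int, Int)]): Option[List[Int]] = {
  // symmetric closure of the quarrel relation
  val q = quarrels.flatMap { case (x, y) => Set((x, y), (y, x)) }
  def step(seated: List[Int], remaining: Set[Int]): Option[List[Int]] =
    if (remaining.isEmpty) Some(seated.reverse)
    else {
      val ok = remaining.filter(t => seated.headOption.forall(p => !q((p, t))))
      // greedy choice: the candidate with the most quarrels among the remaining
      ok.toList.sortBy(t => -remaining.count(r => q((t, r)))).headOption
        .flatMap(c => step(c :: seated, remaining - c))
    }
  step(Nil, (1 to n).toSet)
}
```

Each seat choice scans the remaining set and recomputes quarrel counts inside the sort, so this naive form is roughly O(n^3); a priority queue would tighten that, but the greedy choice itself is still only a heuristic for this problem.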

by user26234 at January 30, 2015 02:17 PM



Fred Wilson

Fun Friday: Comedy Hour

I just realized we haven’t done a fun friday since mid December. It’s gotten way too serious around here. I’m sorry about that. So we are going to rectify that by posting our favorite comedy routines (youtube embeds or anything else that will work in the comment thread).

Here’s my contribution. I recently saw Jason Mantzoukas in Sleeping With Other People and he just cracks me up. So I spent some time on YouTube just now looking for something good from Jason. This bit about making french press coffee is spot on and is why I never make coffee that way.

So now I’ve made my contribution to fun friday. It’s time for yours.

by Fred Wilson at January 30, 2015 02:12 PM


Attempt to add annotation to defrecord defined class in macro

I'm attempting to create a macro similar to the Quartzite defjob macro that creates the Job class with the @DisallowConcurrentExecution annotation added to it. The code works from the REPL, but not inside the macro.

This works...

user=> (defrecord ^{DisallowConcurrentExecution true} YYY []
  #_=>   org.quartz.Job
  #_=>   (execute [this context]
  #_=>            (println "whoosh!")))
user=> (seq (.getAnnotations YYY))
(#<$Proxy3 @org.quartz.DisallowConcurrentExecution()>)

...but this does not.

(defmacro defncjob
  [jtype args & body]
  `(defrecord ^{DisallowConcurrentExecution true} ~jtype []
     org.quartz.Job
     (execute [this ~@args]
       ~@body)))

After Rodrigo's suggestion, here is a way to make it work.

(defmacro defdcejob
  [jtype args & body]
  `(defrecord ~(vary-meta jtype assoc `DisallowConcurrentExecution true) []
     org.quartz.Job
     (execute [this ~@args]
       ~@body)))

by Bill at January 30, 2015 02:10 PM




How to download files from server using Polymer framework

I want to download a file that is served by my back end, using Polymer.
The back end is written in Scala with the Play Framework.
The action responsible for serving the file works (I've tested it by entering the action path in a browser tab).
What I want is to get this file via the core-ajax component.
Maybe I should use something else? I cannot find any hints on the web.

This is the core-ajax element that I'm using.


by partTimeNinja at January 30, 2015 02:05 PM



How to make Clojure's `while` return a value

Can someone please explain why Clojure's while macro doesn't return a value?

The documentation says "Presumes some side-effect will cause test to become false/nil." Okay, so we use swap! to modify the atom which is used in the test case. But can't we still have a return value (perhaps returning repeatedly, as in the case with loop), along with the side effects?

Looking at the source, it seems like the cause could be something to do with how recursion and macros work, but I don't know enough about the internals of that to speculate.


The following code only returns nil:

(let [a (atom 0)]
  (while (< @a 10)
    (+ 1 @a)
    (swap! a inc)))

That is, it does not return the value of (+ 1 @a), nor does it return any other value or function that is inside the while loop. If we want to get some value calculated via the while loop, we could use print, but then we can't easily use the value in some other operation. For a return value, we have to use a second atom, like this:

(let [a (atom 0)
      end (atom 0)]
  (while (< @a 10)
    (swap! end #(+ 1 %))
    (swap! a inc))
  @end)

by Ben Sima at January 30, 2015 01:57 PM


yahoo finance pandas etf data

I would like to fetch some ETF data from yahoo finance using pandas.

If I go onto the yahoo finance website, I can find the single ETFs (e.g. C001).

However, if I try to pull the data using python pandas, I get nothing.

df ='F', 'C001', start=datetime(2010, 1, 1))

The code works fine if I use 'yahoo' instead of 'C001'.

Is there something obvious I am doing wrong? Is there a reason why 'yahoo' works but the ETF ticker symbols don't?

Thanks a lot in advance.

by fabee at January 30, 2015 01:50 PM


LiftSession.cometSetup wrong message order. Lift 2.6

When I send messages to a Comet Actor with session.sendCometActorMessage which isn't set up yet, Lift stores messages in LiftSession.cometSetup List:

def setupComet(theType: String, name: Box[String], msg: Any) {
  testStatefulFeature {
    cometSetup.atomicUpdate(v => (Full(theType) -> name, msg) :: v)
  }
}

Later, when the Comet Actor is set up, the messages are sent back to the actor in LiftSession.findComet:

for {
  actor <- ret
  (cst, csv) <- if cst == what
} actor ! csv

Notice that Lift prepends messages to cometSetup List and reads those from head to tail, hence it sends messages in reverse order. Which is, well, not what you expect from an actor.

Is it expected behaviour or a bug? Lift 2.6

by nau at January 30, 2015 01:36 PM

Purpose of `render` in json4s

In json4s examples and documentation I often see the idioms

compact(render(json))

pretty(render(json))
I do not think I have actually seen an example with compact or pretty applied directly to a code-generated JValue, but it is not clear to me what render is doing here. render has type JValue => JValue, and I do not see any obvious difference it makes; running

json.take(100000).filter(x => compact(render(x)) != compact(x))

on some of my data returns an empty collection.

What does render actually do?

by Daniel Mahler at January 30, 2015 01:31 PM

Ramda does not provide deep mixin?

var _ = require('ramda');

var obj1 = {
    innerNum: 1,
    innerObj: {
        innerStr: 'a',
        innerStrB: 'Z'
    }
};

var obj2 = {
    innerNum: 2,
    innerObj: {
        innerStr: 'b'
    }
};

var mixedObj = _.mixin(obj1, obj2);

mixedObj does not include the inner object's innerStrB. Is there a Ramda solution?

by Daniel at January 30, 2015 01:28 PM


Straddle neutral strategy

What does it mean to implement a delta-neutral strategy for a straddle?

A straddle consists of buying a call and a put simultaneously, at the same date, on the same underlying, with the same maturity and strike.

It is possible to gain from a move up or down in the underlying price, when the move more than compensates for the premiums paid.

What about gaining in a straddle strategy from delta-neutral management?
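For context, a standard Black–Scholes identity (my addition, not from the question): the delta of a straddle is the sum of the call and put deltas,

$$\Delta_{straddle} = \Delta_{C} + \Delta_{P} = N(d_1) + \bigl(N(d_1) - 1\bigr) = 2N(d_1) - 1,$$

so the position starts delta-neutral when $N(d_1) = \tfrac{1}{2}$, i.e. roughly at the money forward; delta-neutral management then means re-hedging with the underlying as the spot moves and $\Delta_{straddle}$ drifts away from zero.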

by octoback at January 30, 2015 01:28 PM


Implicit jsonFormat for case class with varargs

I have a case class containing varargs, with an implicit jsonFormat as follows:

import spray.json._
case class Colors(name: String*)
object MyJsonProtocol extends DefaultJsonProtocol {
  implicit val colorFormat = jsonFormat1(Colors)
}
import MyJsonProtocol._

It raises an error:

error: type mismatch;
found   : Color2.type
required: Seq[String] => Color2
Note: implicit value colorFormat is not applicable here because it comes after the application point and it lacks an explicit result type
      implicit val colorFormat = jsonFormat1(Color2)

I have also tried:

implicit val colorFormat = jsonFormat1(Colors.apply)

which caused a different (runtime!) exception:

java.lang.RuntimeException: Cannot automatically determine case class field names and order for 'Colors', please use the 'jsonFormat' overload with explicit field name specification

The following:

implicit val colorFormat = jsonFormat(Colors, "name")

raises the former error again.

Is it even possible to define an implicit jsonFormat for a case class with varargs?

by mirelon at January 30, 2015 01:28 PM



Debugging Scala code with simple-build-tool (sbt) and IntelliJ

What's the easiest way to debug Scala code managed by sbt using IntelliJ's built-in debugger? The documentation at lists commands for running the main class for a project or the tests, but there seem to be no commands for debugging.

Follow-up question: what's the easiest way to attach IntelliJ's debugger to Jetty when using sbt's jetty-run command?

by Matthew at January 30, 2015 01:18 PM


Is this grammar LALR(1)

I am working through creating a LALR(1) parser generator. I have the following grammar which I designed to highlight the epsilon for rule 3.

0. S' -> A
1. A -> a B b
2. B -> b
3. B -> 

This recognizes ab and abb. It is my assumption that with one symbol of look-ahead, I would know whether to reduce using rule 3 or shift using rule 2. That is, when scanning ab, I need to reduce using rule 3 to get a B onto my stack. But when the input is abb I want to just shift the first b, then reduce it by rule 2.

As I work through creating a LALR(1) parsing table, I get the following:

 , $ , a , b , A , B 
0,   , s2,   , g1,   ,
1, a ,   ,   ,   ,   ,
2,   ,   , s5,   , g3,
3,   ,   , s4,   ,   ,
4, r1,   ,   ,   ,   ,
5,   ,   , r2,   ,   ,

The issue is that there is an ambiguity in state [2, b]: a 'reduce by 3' was rejected. Of course, this is why I chose the grammar.

I calculated first and follow as:

* First sets *
A { a } epsilon=false
B { b } epsilon=true
S' { a } epsilon=false

* Follow sets *
S' { $ }
A { $ }
B { b }

Anyway, I have implemented this as best I know how and the ambiguity remains. Am I doing something wrong? Am I expecting too much from LALR(1)?

EDIT - one possible place where I am confused could be when 'lookahead' occurs. If the lookahead symbol is only used to make a decision on whether a reduction should occur, then to be successful we require enough information after we shift the first a. Basically, if we see two b's coming, we can infer our only chance of success is to shift the first one and then reduce it to B. If there is only one b, we'll need to reduce the epsilon to get a B onto the stack. Then we'll be set up to successfully reduce aBb. By this metric, this would be an LALR(2) grammar. Does this make sense (in the scope of LALR(k) parsers), or am I just wrong?

by Tony Ennis at January 30, 2015 01:17 PM


Where am I losing lazyness?

I am attempting to program a lazy iterative prime sieve.

Each iteration of the sieve is represented by

[[primes] (candidates) {priority-map-of-multiples-of-primes}]

and I have a function, sieve-next-candidate-update, that merrily takes a sieve iteration, checks the next candidate, and updates all three components appropriately.

Used like this:

(take 3 (iterate sieve-next-candidate-update [[2] (range 3 13 2) (priority-map 2 2)]))

I get the results I expect:

([[2] (3 5 7 9 11) {2 2}]
 [[2 3] (5 7 9 11) {3 3, 2 4}]
 [[2 3 5] (7 9 11) {5 5, 2 6, 3 6}])

However, when I run it through a reduce to remove the iterations that do not find a new prime, it attempts to process the entire sequence no matter how I define the initial candidates list (an apparent infinite loop if I use iterate).

(take 3 
   (reduce (fn [old-primes-sieves new-sieve]
             (prn (str "reduce fn called with new sieve" new-sieve))
             (if (= (last (first (last old-primes-sieves))) ; I'm aware I don't want last in the final version
                    (last (first new-sieve)))
               (conj old-primes-sieves new-sieve)))
           (iterate sieve-next-candidate-update [[2] (range 3 13 2) (priority-map 2 2)])))


"reduce fn called with new sieve[[2] (3 5 7 9 11) {2 2}]"
"reduce fn called with new sieve[[2 3] (5 7 9 11) {3 3, 2 4}]"
"reduce fn called with new sieve[[2 3 5] (7 9 11) {5 5, 2 6, 3 6}]"
"reduce fn called with new sieve[[2 3 5 7] (9 11) {7 7, 2 8, 3 9, 5 10}]"
"reduce fn called with new sieve[[2 3 5 7] (11) {3 9, 2 10, 5 10, 7 14}]"
"reduce fn called with new sieve[[2 3 5 7 11] nil {11 11, 2 12, 3 12, 7 14, 5 15}]"

and then throws a NullPointerException in this limited case.

Where am I losing lazyness?

by status203 at January 30, 2015 01:09 PM


Remote-First Communication for Project Teams


“If anyone is remote, you’re all remote.”

At Atomic Object, we value co-located teams. But not every team member can always be co-located. Larger project teams may have members from multiple offices. Some projects might involve working closely with other vendors. I experience this “remoteness” when I support the infrastructure needs of teams in our Ann Arbor and Detroit offices.

When these situations arise, it helps if your communication style is already what I would call “remote-first”.

What is remote-first communication?

Remote-first communication prioritizes communicating with those who are not here now. Whether that’s a team member who’s working from home with a bad head cold or your client on the coast, successful projects go the extra mile to communicate effectively with those who are remote.

Generally speaking, remote-first communication means preferring written, searchable methods of communication that work even when the sender and receiver aren’t engaged at the same time. This means that phone calls, while potentially much better at conveying tone and establishing emotional connections, cannot be the default method of connecting with teammates.

Remember, the group of people on your team who are “not here now” also includes anyone who might work on the project at any point in the future, including yourself. Remote-first communication has the knock-on effect of acting as a form of documentation—recording conversations had, decisions made, resources shared, etc.

Privacy Limits & No-blame Culture

Remote-first communication that clearly documents problems encountered, ideas proposed, and decisions reached works best in organizations with strong no-blame cultures. The extent to which team members fear their words being used against them in the future limits their candor in ways that can greatly impede coming to solutions. This is generally true, but the lasting verbatim record of remote-first communication greatly amplifies the need to tend to this culture.

Remote-first Tools used by Atomic

Remote-first communication is not just about the media used, but also the way in which they are used, the patterns of communication. Teams may use a variety of tools to communicate, but remote-first communication patterns have, as a default, a medium that favors communication with remote team members.

Here are some tools I’ve seen used in this way at Atomic:

  • Trello
  • Jabber/XMPP/GTalk
  • HipChat
  • Slack
  • IRC
  • Pivotal Tracker
  • Basecamp
  • E-mail

What is important to note is that these platforms can and are used even by team members who work right next to each other.

Remote in the Open

One downside to our open office environment can be distracting noise levels from neighboring project teams. While we’re exploring ways we can change our space to mitigate such issues, remote-first communication also helps keep noise levels down.

Another benefit of remote-first communication is that it can create a space for people who prefer written communication to speaking aloud. This might be a team member who is shy, or whose first language isn’t English, or who is hard of hearing—there are a lot of reasons people might be more comfortable communicating through text. I know that I sometimes value having a moment to switch contexts and collect my thoughts before responding to an instant message, a luxury not always afforded by direct in-person questions.

Remote-first Is not Remote-always

Remote-first communication does not mean that your only communication should be written. There are certainly times when in-person conversations or phone calls will be much more effective. Particularly when it comes to delivering disappointing news or when first getting to know your client or team, it can be much better to converse in a way that allows an emotional connection.

In short, teams that use text-based communication tools as a default communication medium are better equipped for remote communication across both space and time.

How do your teams practice remote-first communication?

Further Reading on Communicating with Remote Teams:

The post Remote-First Communication for Project Teams appeared first on Atomic Spin.

by Mike English at January 30, 2015 01:00 PM


Browse into Session attribute

I'm trying to check whether the responses of my regex are in my JSON. For that I stored the regex response into a variable metas. Now I need to iterate over this variable to compare against all the values of metas, and I can't figure out how to write this. \"${metas(\"${nbr_metas}\")}\" doesn't seem to be working.

Here is how I am doing it:

   val scn_get_content   = scenario("Test")
    .exec(http("get metas")

    .repeat("${metas.size()}", "nbr_metas") {        
    exec(http("Get JSON")


by ALS at January 30, 2015 12:50 PM



arbitrage opportunity in a two period model

I have a little problem evaluating a European call. Suppose that the current stock price equals the strike price (10$) at t=0. The stock evolves to {10,11}, each with probability 0.5, at t=1. Furthermore the riskless rate is set to r = 1.049. Now the author of my paper states that the value of the call must be V < 1/r * (11 - 10) * 0.5 = 0.477 in order to avoid arbitrage. Can anyone see how this is the case?

I am aware that the stock is not valued with respect to the fair valuation after Cox–Rubinstein, which would be 10.5. Any help appreciated :)

Solution: for V > 1/r * (11 - 10) * 0.5 I see that I can short the call and buy a zero bond instead, which lets me pay my liability in period 1 in any case.

by Dachser at January 30, 2015 12:35 PM


Deserialize json without name

I'm using Scala with json4s to consume JSON. To deserialize I'm calling org.json4s.native.JsonMethods.parse and the ExtractableJsonAstNode.extract method. This is part of the JSON file:

     "": {
        "atribute1": "v1",
        "instanceId": "i",

It contains an attribute without a name. What should the field name be in the case class to successfully deserialize these attributes?

by KrzyH at January 30, 2015 12:30 PM



Scala SQLite Invalid Query for DELETE

I have problem when trying to add a row and then delete one row from a table. What I have now is:

   def addItem(item: Item) = {
        val query = items.filter( ===
        items += (, item.timestamp)
        if (query.list.length > 10)

It is supposed to store 10 latest items in the database by removing the oldest if there is more than 10.

But I get an error saying

SlickException : Invalid query for DELETE statement: A single source table is required, found List((s2,Comprehension))

I do have another table in the database but this should have nothing to do with that, there is not even a relation between the two tables.

Do you have any ideas what might be wrong? Or is there another way of keeping only the last 10 values in the DB? The timestamp is a java.sql.Timestamp and I'm using the Slick library for SQLite in Scala. Also, the class Item just holds a string and a timestamp.

Thanks! Any help is appreciated!

by Ou Tsei at January 30, 2015 12:08 PM



Install tomcat and enable AUTHBIND via apt module of ansible doesn't start tomcat service

This is a strange issue but it happens: on a fresh Ubuntu (14.04 in my case), when I install tomcat7 using the apt module of Ansible, it installs successfully.

After installing tomcat I'm also enabling AUTHBIND and creating byport directories as well (which is needed in Ubuntu 14).

The problem appears when I start the tomcat service after enabling AUTHBIND: the service doesn't start, and when I comment out the line "AUTHBIND" in the file /etc/default/tomcat7 the service starts fine.

Also, when I use the command module to install tomcat and the rest of the code is intact (setting up AUTHBIND and byport directories), everything works fine, so the difference is just the "apt" versus "command" modules used to install tomcat.

Does anyone have any suggestions for digging into this issue?

by ankit tyagi at January 30, 2015 11:34 AM


Real-time data map about the amount of a currency that is held in the world?

Where can I see real-time data about the amount of a currency that is held in central banks (and maybe other significant places)? A map would be great.

I would like to know if there is an institution that watches where the currency reserves are held for the different currencies out there.

I would especially like to know where the US dollar is held around the world.

I have another related question: knowing that oil and the US dollar are directly correlated, can oil reserves be considered as US currency reserves in a certain way?

I quickly tried the IMF website but did not find anything.

FYI : I'm just trying to get myself a personal point of view about the recent dollar value raise, and I know peanuts about economics and finance : don't hesitate to prove me wrong :) .

Thanks, Julian

by Julian at January 30, 2015 11:28 AM

Planet Theory

TR15-015 | Multi-$k$-ic depth three circuit lower bound | Neeraj Kayal, Chandan Saha

In a multi-$k$-ic depth three circuit every variable appears in at most $k$ of the linear polynomials in every product gate of the circuit. This model is a natural generalization of multilinear depth three circuits that allows the formal degree of the circuit to exceed the number of underlying variables (as the formal degree of a multi-$k$-ic depth three circuit can be $kn$ where $n$ is the number of variables). The problem of proving lower bounds for depth three circuits with high formal degree has gained in importance following a work by Gupta, Kamath, Kayal and Saptharishi (FOCS 2013) on depth reduction to high formal degree depth three circuits. In this work, we show an exponential lower bound for multi-$k$-ic depth three circuits for any arbitrary constant $k$.

January 30, 2015 11:28 AM


FreeBSD package installation offline

I am trying to learn FreeBSD and have been trying to install xorg-minimal, gedit and libreoffice offline for a couple of weeks now (read manual) and just keep going around in circles. It is a new install of FreeBSD 10. Is there anyone here who will take the time to help and go through the basics for me?

I have saved xorg-minimal-7.5.2.tbz and gedit and libreoffice to disk and also succeeded in installing pkg-1.8.3.

During my last attempt, I edited /usr/local/etc/pkg/repos/FreeBSD.conf like this:

FreeBSD: {
  enabled: no
}

and then edited /usr/local/etc/pkg/repos/<fileName>.conf like this:

<fileName>: {
  url: "file:///.../.../.../<packages>/",
  enabled: yes
}

When I try to use pkg install, I get errors like these:

pkg: file:/.../.../meta.txz : No such file or directory
pkg: repository ... has no meta file,
pkg: file:/.../.../digests.txz: No such file or directory
pkg: ///xorg-minimal-7.5.2.tbz is not a valid package: no manifest found

Like I said, I have tried so many things, I am starting to feel a little punch drunk and it would not surprise me if I am leaving out some critical step.

by David at January 30, 2015 11:24 AM


Play 2.2 - specs2 - How to test futures in play 2.2?

My way of testing futures was using value1. I migrated to Play 2.2 and found out that my accustomed way to test is gone: @scala.deprecated("Use scala.concurrent.Promise instead.", "2.2")

Any help would be greatly appreciated.


by OliverKK at January 30, 2015 11:19 AM

guava - Iterables avoid iterating multiple times

I have a list of objects that contains two string properties.

public class A {
    public String a;
    public String b;
}

I want to retrieve two Sets one containing property a and one b.

The naive approach is something long these lines:

List<A> list = ....
Set<String> listofa = new HashSet<>();
Set<String> listofb = new HashSet<>();
for (A item : list) {
    if (item.a != null)
        listofa.add(item.a);
    if (item.b != null)
        listofb.add(item.b);
}


Trying to do in a functional way in guava I ended up with this approach:

Function<A, String> getAFromList = new Function<A, String>() {
    public String apply(@Nullable A input) {
        return input.a;
    }
};

Function<A, String> getBFromList = new Function<A, String>() {
    public String apply(@Nullable A input) {
        return input.b;
    }
};

FluentIterable<A> iterables = FluentIterable.from(list);

Set<String> listofAs = ImmutableSet.copyOf(iterables.transform(getAFromList).filter(Predicates.notNull()));

Set<String> listofBs = ImmutableSet.copyOf(iterables.transform(getBFromList).filter(Predicates.notNull()));

However this way I would iterate twice over the list.

Is there any way how to avoid iterating twice or multiple times ?

In general, how does one solve these use cases in a functional way (not only in Guava/Java)?
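A one-pass answer to the general question can be sketched in Scala with a single fold that accumulates both sets simultaneously (illustrative, not Guava — `collectBoth` and the case class are my own names):

```scala
// One pass over the list, threading a pair of sets through a fold.
case class A(a: String, b: String)

def collectBoth(list: List[A]): (Set[String], Set[String]) =
  list.foldLeft((Set.empty[String], Set.empty[String])) {
    case ((as, bs), item) =>
      (if (item.a != null) as + item.a else as,
       if (item.b != null) bs + item.b else bs)
  }
```

The fold makes the single traversal explicit; the trade-off versus the two FluentIterable pipelines is that the two projections are now coupled in one function.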

by Ümit at January 30, 2015 11:16 AM

Making Future based API wrappers around Akka perform better (ask is a memory hog)

I've been playing around with an Akka (v 2.3.8) actor prototype to implement a local cache, and I'm having issues getting it to perform as well as another local cache implementation.

The API of this Akka based local cache returns Future[_] instances for operations such as put, get...etc, so it's an async API whereas the other local cache implementation returns V, or void, so it's sync.

Initially, I had opted to use a per-request actor with the Akka ask pattern, so that when the per-request actor returns a response, the Future is completed. When running performance tests against the other local cache implementation, this Akka based approach consumes 3x the memory (350MB vs 1GB) and performs far worse (5x-6x). I've noticed there's a lot of short-lived garbage generated by Akka in this approach, and one of the biggest consumers seems to be java.lang.reflect.Field instances generated by this stack trace:

java.lang.reflect.ReflectAccess.copyField(Field)
sun.reflect.ReflectionFactory.copyField(Field)
java.lang.Class.copyFields(Field[])
java.lang.Class.getDeclaredFields(), Object, String, Object)$class.akka$actor$dungeon$FaultHandling$$finishTerminate(ActorCell), ActorContext, ActorRef)

Essentially, seems like an Actor's fields are cleared when it terminates, but if you're using a per-request actor, this seems to create a lot of garbage.

I've also tried another approach where I get rid of the per-request actor and instead the message that I pass to the hierarchy supervisor contains a Promise. When the supervisor receives a response from the layers down, it completes the Promise. This approach generates a lot less garbage and, in fact, consumes roughly the same memory as the other local cache implementation. However, it still does not perform well. The CPU for the Akka based prototype with this new approach shows roughly 60% usage.
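The Promise-in-the-message idea can be sketched without any Akka machinery (illustrative only — Request, handle, and the store are assumed names, not the prototype's actual types):

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// Hedged sketch: the request carries a Promise that the handler completes,
// so the caller gets a Future without the ask pattern's per-request actor.
case class Request(key: String, reply: Promise[Option[String]])

// Stand-in for the supervisor's receive logic.
def handle(store: Map[String, String])(req: Request): Unit =
  req.reply.success(store.get(req.key))

val p = Promise[Option[String]]()
handle(Map("a" -> "1"))(Request("a", p))
// p.future now holds Some("1")
```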

With some thread dumps I realised that completing the Promise is a slow process, and the supervisor seems unable to cope with the amount of requests coming its way, so you see a lot of these stack traces in the thread dumps:

"" #23 prio=5 os_prio=0 tid=0x00007fb4b0428800 nid=0x17c1 runnable [0x00007fb512312000] java.lang.Thread.State: RUNNABLE at sun.misc.Unsafe.unpark(Native Method) at java.util.concurrent.locks.LockSupport.unpark( at java.util.concurrent.locks.AbstractQueuedSynchronizer.unparkSuccessor( at java.util.concurrent.locks.AbstractQueuedSynchronizer.doReleaseShared( at java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared( at scala.concurrent.impl.Promise$CompletionLatch.apply(Promise.scala:73) at scala.concurrent.impl.Promise$CompletionLatch.apply(Promise.scala:67) at at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63) at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78) at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55) at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55) at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) at scala.concurrent.BatchingExecutor$ at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599) at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106) at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597) at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) at scala.concurrent.Promise$class.complete(Promise.scala:55) at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) at actor.LocalActorSupervisor$$anonfun$receive$1.applyOrElse(LocalActorSupervisor.scala:38)

So, the question is this: are there any other approaches to improving performance for a Future[_] based API wrapping Akka? In other words, how can Promise instances be completed in a way that is fast and does not hog memory? Any ideas?

by Galder Zamarreño at January 30, 2015 11:07 AM


FreeBSD 10.1 sound not working

I am new to BSD and I am experimenting with getting an old(ish) Packard Bell desktop machine out of mothballs. The basic installation of FreeBSD 10.1-RELEASE for amd64 seems to have gone fine, but I am stuck trying to get the sound card working.

The hardware is based on a Gigabyte motherboard hosting an Intel P4 processor, with integrated audio that uses the Intel High Definition Audio chipset. Under Linux the audio driver selected is snd_hda_intel, and the expectation under FreeBSD was that the snd_hda module would drive the audio.

After installing FreeBSD the sound was not working, but I followed the instructions available online in a number of places, using kldload to experiment with different drivers. I found that the amd64 GENERIC kernel already has sound and many sound drivers pre-loaded, so efforts to use kldload were misplaced. However, there is no record of any driver installed in /dev/sndstat, and attempts to make noise by typing cat /dev/random > /dev/dsp return error messages saying the operation is not allowed.

What I want to know is, am I wasting my effort to get an ageing sound card working, or is there some trick I have missed?

by Bobble at January 30, 2015 11:02 AM


Amplifying a Locality Sensitive Hash

I'm trying to build a cosine locality sensitive hash so I can find candidate similar pairs of items without having to compare every possible pair. I have it basically working, but most of the pairs in my data seem to have cosine similarity in the -0.2 to +0.2 range so I'm trying to dice it quite finely and pick things with cosine similarity 0.1 and above.

I've been reading Mining Massive Datasets chapter 3. This talks about increasing the accuracy of candidate pair selection by Amplifying a Locality-Sensitive Family. I think I just about understand the mathematical explanation, but I'm struggling to see how I implement this practically.

What I have so far is as follows

  1. I have say 1000 movies each with ratings from some selection of 1M users. Each movie is represented by a sparse vector of user scores (row number = user ID, value = user's score)
  2. I build N random vectors. The vector length matches the length of the movie vectors (i.e. the number of users). The vector values are +1 or -1. I actually encode these vectors as binary to save space, with +1 mapped to 1 and -1 mapped to 0
  3. I build sketch vectors for each movie by taking the dot product of the movie and each of the N random vectors (or rather, if I create a matrix R by laying the N random vectors horizontally and layering them on top of each other then the sketch for movie m is R*m), then taking the sign of each element in the resulting vector, so I end with a sketch vector for each movie of +1s and -1s, which again I encode as binary. Each vector is length N bits.
  4. Next I look for similar sketches by doing the following
    1. I split the sketch vector into b bands of r bits
    2. Each band of r bits is a number. I combine that number with the band number and add the movie to a hash bucket under that number. Each movie can be added to more than one bucket.
    3. I then look in each bucket. Any movies that are in the same bucket are candidate pairs.

Comparing this to 3.6.3 of mmds, my AND step is when I look at bands of r bits - a pair of movies pass the AND step if the r bits have the same value. My OR step happens in the buckets: movies are candidate pairs if they are both in any of the buckets.
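Before amplifying, the band-and-bucket (OR) construction described in step 4 can be sketched as follows. This is a minimal illustration with hypothetical helper names, using tiny 0/1 sketch vectors rather than packed bits:

```scala
// Two items become candidates if ANY band of r bits of their
// sketches is identical (the OR step over bands).
def bandBuckets(sketch: Vector[Int], r: Int): Seq[(Int, Vector[Int])] = { case (band, i) => (i, band) }.toSeq

def candidatePairs(sketches: Map[String, Vector[Int]], r: Int): Set[(String, String)] = {
  // hash each item into one bucket per band, keyed by (band index, band bits)
  val buckets = scala.collection.mutable.Map.empty[(Int, Vector[Int]), List[String]]
  for ((item, sk) <- sketches; key <- bandBuckets(sk, r))
    buckets(key) = item :: buckets.getOrElse(key, Nil)
  // any two items sharing a bucket are a candidate pair
  buckets.values.flatMap(ms => for (a <- ms; b <- ms if a < b) yield (a, b)).toSet
}

val sketches = Map(
  "m1" -> Vector(1, 1, 0, 0),
  "m2" -> Vector(1, 1, 1, 1),
  "m3" -> Vector(0, 1, 1, 0))

// with r = 2, m1 and m2 share band 0 (bits 1,1); m3 shares no band with anyone
assert(candidatePairs(sketches, 2) == Set(("m1", "m2")))
```

Each call to candidatePairs is one AND-then-OR layer; a further OR layer would union the candidate sets from several independent sets of random vectors, and a further AND layer would intersect them.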

The book suggests I can "amplify" my results by adding more AND and OR steps, but I'm at a loss for how to do this practically as the explanation of the construction process for further layers is in terms of checking pairwise equality rather than coming up with bucket numbers.

Can anyone help me understand how to do this?

by Philip Pearl at January 30, 2015 10:58 AM


Yield from a Tuple

Tuples and yields: pretty common constructs. But to my surprise, the following combination of them is not so obvious to make work:

 val edges = ((1,2),(2,3),(3,4),(4,5),(1,6),(3,8),(4,9),(5,10),(1,7),(4,10))

  val edgesw = for (e <- edges) yield (e._1, e._2, 1.0)
   // e is interpreted as "any"
   // therefore e._1 and e._2 are invalid / do not compile


Adding the type parameters seems to help... but why is it needed?

  val edgesw = for (e: (Int, Int) <- edges) yield (e._1, e._2, 1.0)

Another update I neglected the Seq / Array notation!

 val edges = Seq((1,2),(2,3),(3,4),(4,5),(1,6),(3,8),(4,9),(5,10),(1,7),(4,10))

Now the behavior is as expected:

  val edgesw = for (e <- edges) yield (e._1, e._2, 1.0)
 edgesw: Seq[(Int, Int, Double)] = List((1,2,1.0), (2,3,1.0), (3,4,1.0), (4,5,1.0), (1,6,1.0), (3,8,1.0), (4,9,1.0), (5,10,1.0), (1,7,1.0), (4,10,1.0))

by javadba at January 30, 2015 10:57 AM

Sequentially combine arbitrary number of futures in Scala

I'm new to Scala and I am trying to combine several Futures in Scala 2.10-RC3. The Futures should be executed in sequential order. In the document Scala SIP-14 the method andThen is defined in order to execute Futures in sequential order. I used this method to combine several Futures (see example below). My expectation was that it prints 6, but the actual result is 0. What am I doing wrong here? I have two questions:

First, why is the result 0. Second, how can I combine several Futures, so that the execution of the second Future does not start before the first Future has been finished.

val intList = List(1, 2, 3)

val sumOfIntFuture = intList.foldLeft(Future { 0 }) {
  case (future, i) => future andThen {
    case Success(result) => result + i
    case Failure(e) => println(e)
  }
}

sumOfIntFuture onSuccess { case x => println(x) }
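For the second question: andThen is the wrong tool here, because it returns a future carrying the original future's value, so the result + i computed in the callback is discarded and the fold's final future still holds 0 (which also answers the first question). Chaining with flatMap both sequences the futures and threads the sum through. A minimal sketch:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

val intList = List(1, 2, 3)

// flatMap runs each addition only after the previous future has
// finished, and threads the accumulated sum through the chain.
val sumOfIntFuture: Future[Int] = intList.foldLeft(Future.successful(0)) {
  case (future, i) => future.flatMap(sum => Future { sum + i })
}

println(Await.result(sumOfIntFuture, 5.seconds)) // prints 6
```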

by Chrisse at January 30, 2015 10:49 AM

How to fully qualify a var's value?

Imagine that:

(def my-var 'my-symbol)  ;; Please note that it must be 'my-symbol not `my-symbol

my-var ;; => my-symbol

But I want

;; => fully-qualified/my-symbol

Other than converting values to strings, is it possible to fully qualify my-var's value? Thanks.

by roboli at January 30, 2015 10:43 AM

How to flatten list inside RDD?

Is it possible to flatten a list inside an RDD? For example, convert:

 val xxx: org.apache.spark.rdd.RDD[List[Foo]]


 val yyy: org.apache.spark.rdd.RDD[Foo]

How to do this?
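Yes: flatMap with identity does exactly this. Shown here on a plain List; the same one-liner applies to an RDD, since RDD also has flatMap:

```scala
// flatMap(identity) flattens one level of nesting; on Spark the
// equivalent call is: val yyy = xxx.flatMap(identity)
case class Foo(n: Int)

val nested: List[List[Foo]] = List(List(Foo(1), Foo(2)), List(Foo(3)))
val flat: List[Foo] = nested.flatMap(identity)

assert(flat == List(Foo(1), Foo(2), Foo(3)))
```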

by zork at January 30, 2015 10:27 AM


Union of finite and non-regular language [duplicate]

This question already has an answer here:

Question: ($B$ and $C$ are languages) $B$ is finite,$C$ isn't regular:

Prove/Disprove: $C\cup B$ isn't regular.

Thoughts: My intuition says this is true, but I need an idea to prove it. Since I don't know if $C$ as a CFG or RE language I don't know what kind of machine I can build for it.

by user1685224 at January 30, 2015 10:25 AM


Scala/Akka WSResponse recursively call

I'm trying to parse some data from an API.

I have a recursive method that calls this method:

 def getJsonValue(url: String): JsValue = {
   val builder = new com.ning.http.client.AsyncHttpClientConfig.Builder()
   val client = new NingWSClient(
   val newUrl = url.replace("\"", "").replace("|", "%7C").trim
   val response: Future[WSResponse] = client.url(newUrl).get()
   Await.result(response, Duration.create(10, "seconds")).json
 }

Everything works well, but after 128 method calls I get this warning:

WARNING: You are creating too many HashedWheelTimer instances.  HashedWheelTimer is a shared resource that must be reused across the application, so that only a few instances are created.

After about 20 more calls I get this exception:

23:24:57.425 [main] ERROR com.ning.http.client.AsyncHttpClient - Unable to instantiate       provider com.ning.http.client.providers.netty.NettyAsyncHttpProvider.  Trying other providers.
23:24:57.438 [main] ERROR com.ning.http.client.AsyncHttpClient - Failed to create a selector.


1. I'm assuming that the connections weren't closed, and therefore I can't create new ones?

2. What is the correct and safe way to make these HTTP calls?

by MIkCode at January 30, 2015 10:23 AM


Derive logitboost using the logistic loss function

An additive model constructed using the exponential loss function

L(y, f (x))=exp(−yf (x))

gives Adaboost. How can we derive the corresponding additive model (known as logitboost) using the logistic loss function

L(y, f (x)) = log(1 + exp(−yf (x))).

What steps I should take to do the above proof?

by find-missing-semicolon at January 30, 2015 10:12 AM


What does ()=> mean in Scala?

I understand that a "call by name" argument is defined as foo(arg: => T) but, what does this mean?

def foo[T](block: => T) = {
  List(1, 2, 3).map(_ => () => block)
}

Specifically, I don't understand the () => part.

Wouldn't it be enough to write map(_ => argByName) ?
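A small sketch may help. A by-name argument (: => T) is re-evaluated each time it is referenced, while () => T is an ordinary Function0 value. Writing _ => () => block wraps the by-name argument in a Function0, deferring evaluation until each function value is applied; map(_ => block) would instead evaluate block once per element immediately, during the map, and store plain values:

```scala
var counter = 0
def tick(): Int = { counter += 1; counter }

// block is a by-name argument; () => block wraps it in a Function0,
// so evaluation is deferred until each function value is applied.
def foo(block: => Int): List[() => Int] =
  List(1, 2, 3).map(_ => () => block)

val fs = foo(tick())
assert(counter == 0)          // nothing has been evaluated yet

val results =  // each application re-evaluates tick()
assert(results == List(1, 2, 3))
```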

by hanbzu at January 30, 2015 10:11 AM


What machine learning classifiers are the most parallelizable?

What machine learning classifiers are the most parallelizable? If you had a difficult classification problem, limited time, but a decent LAN of computers to work with, which classifiers would you try?

Offhand, it looks to me like some standard classifiers I know of stack up as follows, but I could be totally wrong:

Random Forests - Very parallelizable, as long as each machine can hold all the data (i.e. you can't divide up the training data per se, but it is otherwise parallelizable).

Boosting - ?

Support Vector Machines - Not very parallelizable.

Decision trees - Can be divided up in part, but not very efficiently.

by John Robertson at January 30, 2015 10:08 AM


Complement Threshold Function Degree Value

Anti-Threshold function is given by - $ATh_k^n(x)=1\iff x_1+\dots+x_n\le k$

What are the degrees of one-sided and two-sided error real polynomial approximations of $ATh_k^n(x)$?

That is:

Given $\epsilon\in(0,\frac{1}{2})$, what is the minimum degree polynomial such that $$|p_{\epsilon}(x)-ATh_k^n(x)|\leq\epsilon?$$

Given $\epsilon\in(0,1)$, what is the minimum degree polynomial such that $$ATh_k^n(x)=0\implies p_{0,\epsilon}(x)=0$$ $$ATh_k^n(x)=1\implies |p_{0,\epsilon}(x)-1|\leq\epsilon?$$

Given $\epsilon\in(0,1)$, what is the minimum degree polynomial such that $$ATh_k^n(x)=1\implies p_{1,\epsilon}(x)=1$$ $$ATh_k^n(x)=0\implies |p_{1,\epsilon}(x)|\leq\epsilon?$$

I looked at Jukna's book and found that this is a symmetric function. Beyond that, I could not gather much information on approximation degrees.

I am particularly interested in knowing whether $deg(p_{0,\epsilon}(x))\leq k$ and $deg(p_{1,\epsilon}(x))= n$ are possible.

From Paturi's work, I also believe $\sqrt{n}\leq deg(p_{\epsilon}(x))$.

So does $$deg(p_{\epsilon}(x))\leq deg(p_{a,\epsilon}(x))$$ where $a\in\{0,1\}$ hold with symmetric functions?

by Turbo at January 30, 2015 10:08 AM


Play Framework. Last get var corrupted

Play Framework. Scala.


GET /admin/users/:page/:pageSize/:filter     controllers.admin.UserController.userList(page:Int, pageSize:Int, filter:String)


def userList(page: Int, pageSize: Int, filter: String): EssentialAction = isAuthenticatedFuture {...}

Each time I request similar page I have 2 requests where first request is correct

(e.g /admin/users/1/20/admin) 

and second one is corrupted

(e.g. /admin/users/1/20/Action(parser=BodyParser(anyContent)))

So in the second (unexpected) request I have "Action(parser=BodyParser(anyContent))" as the last variable. Nothing changes if I use

GET /admin/users/:page/:pageSize     controllers.admin.UserController.userList(page:Int, pageSize:Int)
def userList(page: Int, pageSize: Int): EssentialAction = isAuthenticatedFuture {...}

I still have 2 requests: first is fine and second with error:


(In this case I can see a casting error in the Firebug console, but the page renders fine: the first request returns 200 OK, the second has status code 400 (Bad Request) because an Int was expected but a String was found as the last param.)

Why do I have 2 requests instead of one, with the second one having a corrupted last variable?

Thanks in advance

by Alex at January 30, 2015 10:07 AM


Advantages of ANN classifiers over the AdaBoost

So what are the advantages of ANN classifiers over the AdaBoost or Boosting algorithm?

by Daria at January 30, 2015 10:02 AM


Clojure, PigPen: unable to access function parameter

I have a problem accessing a function parameter in a pig/map function. Here is an example:

 (defn date-interval
  ([start end] (date-interval start end []))
  ([start end interval]
   (if (co/after? start end)
     (recur (co/plus start (co/days 1)) end (concat interval [start])))))

(defn open-date [start end close-date]
  (->> close-date
   (pig/map (fn [x]
              {:toto (:toto x)
               :open-date (remove (:close-date x) (date-interval start end))}))))

I am trying to execute

(open-date start end close-date)

Where close-date is the result of a pig job, and I get:

Caused by: java.lang.RuntimeException: Unable to resolve symbol: start in this context

I do not understand why I can't access that specific value.

Thanks for your help


I solved my problem by changing the start and end format from org.joda.time.DateTime to String.

by Elie at January 30, 2015 09:59 AM


Is it possible to store a counter that could reach $\lfloor \frac{N}{x}\rfloor$ using $\lceil\log_2(N+1)\rceil$ - $\lfloor\log_2 x\rfloor$ bits?

Let $x,N$ be positive integers.

I'd like to store a counter which could reach a value of $\left\lfloor \frac{N}{x}\right\rfloor$ (i.e. could take any value in $0,1,\ldots,\left\lfloor\frac{N}{x}\right\rfloor$) using $$\lceil\log_2(N+1)\rceil - \lfloor\log_2 x\rfloor$$ bits.


Equivalently, does the following hold: $$\left\lceil\log_2\left(\left\lfloor \frac{N}{x}\right\rfloor + 1\right)\right\rceil \leq \lceil\log_2(N+1)\rceil - \lfloor\log_2 x\rfloor$$

Obviously, this holds for $x=1$, what about general $x$?

by A C at January 30, 2015 09:56 AM


Branching Boosting Algorithms

Long/Servedio showed that AdaBoost etc. don't perform well in noisy environments, but that branching forms of boosting do. Can anyone point me to a list of branching boosting algorithms, or a reference for the current best-of-breed alternatives?

Much appreciated!

by Gene at January 30, 2015 09:52 AM


How to iterate twice using iterator method in scala

// iteratorFunc is an Iterable[SomeClass]
val iterator1 = iteratorFunc.iterator

iterator1 foreach { /* ... */ }

val iterator2 = iteratorFunc.iterator

iterator2 foreach { /* ... */ }

The code inside the iterator1 foreach completes successfully, but iterator2 turns out to be an empty iterator.

Please help.
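For reference, an Iterator is single-use, while a true Iterable hands out a fresh, independent iterator on every .iterator call. If iterator2 comes back empty, the likely cause is that iteratorFunc is really a single-pass Iterator (or a lazily consumed source) rather than a genuine Iterable. A plain-Scala sketch of the expected behaviour:

```scala
// An Iterable hands out a fresh iterator on every .iterator call;
// each Iterator itself can only be traversed once.
val iterable: Iterable[Int] = List(1, 2, 3)

val it1 = iterable.iterator
assert(it1.toList == List(1, 2, 3))
assert(it1.isEmpty)                 // it1 is now exhausted

val it2 = iterable.iterator         // independent, starts from the beginning
assert(it2.toList == List(1, 2, 3))
```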

by Anish Shah at January 30, 2015 09:48 AM

Capturing all subgroups

How can I capture all repeating groups?

I wanted to have one single match covering all the letters separated by dashes. I was expecting to see 3 groups, each containing a letter. What is happening? Can I get all the groups?

val matcher = java.util.regex.Pattern.compile("(?:(\\w)-?)+").matcher("a-b-c")

This prints


I was expecting to get something like


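Java regexes keep only the last value a repeated capturing group matched, so (?:(\w)-?)+ applied to a-b-c leaves group(1) == "c" and the earlier letters are lost. To collect every letter, match a single-letter pattern repeatedly instead:

```scala
import scala.collection.mutable.ListBuffer

// Match one letter at a time and collect each group(1) as we go.
val m = java.util.regex.Pattern.compile("(\\w)-?").matcher("a-b-c")
val letters = ListBuffer.empty[String]
while (m.find()) letters +=

assert(letters.toList == List("a", "b", "c"))
```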
by raisercostin at January 30, 2015 09:46 AM

ansible jinja2 concatenate IP addresses

I would like to cocatenate a group of ips into a string.

example ip1:2181,ip2:2181,ip3:2181,etc

{% for host in groups['zookeeper'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

I have the above code, but can't seem to quite figure out how to concatenate into a string.

searching for "Jinja2 concatenate" doesn't give me the info I need.

by Simply Seth at January 30, 2015 09:40 AM

Planet Theory

TR15-014 | Reliable Communication over Highly Connected Noisy Networks | Ran Gelles, Noga Alon, Mark Braverman, Klim Efremenko, Bernhard Haeupler

We consider the task of multiparty computation performed over networks in the presence of random noise. Given an $n$-party protocol that takes $R$ rounds assuming noiseless communication, the goal is to find a coding scheme that takes $R'$ rounds and computes the same function with high probability even when the communication is noisy, while maintaining a constant asymptotical rate, i.e., while keeping $\lim_{n,R\to\infty} R/R'$ positive. Rajagopalan and Schulman (STOC '94) were the first to consider this question, and provided a coding scheme with rate $O(1/\log (d+1))$, where $d$ is the maximal degree of connectivity in the network. While that scheme provides a constant rate coding for many practical situations, in the worst case, e.g., when the network is a complete graph, the rate is $O(1/\log n)$, which tends to $0$ as $n$ tends to infinity. We revisit this question and provide an efficient coding scheme with a constant rate for the interesting case of fully connected networks. We furthermore extend the result and show that if the network has mixing time $m$, then there exists an efficient coding scheme with rate $O(1/m^3\log m)$. This implies a constant rate coding scheme for any $n$-party protocol over a network with a constant mixing time, and in particular for random graphs with $n$ vertices and degrees $n^{\Omega(1)}$.

January 30, 2015 09:38 AM


Can I run Bash scripts in FreeBSD without modifying them?

Correct me if I'm wrong:

  • "sh" script != "bash" script
  • Linux scripts are written in Bash
  • Bash scripts usually start with #!/bin/sh
  • In GNU/Linux, /bin/sh is Bash
  • In FreeBSD, /bin/sh is not bash, it's the true sh

So if I want to use a Linux script in FreeBSD, and I run ./ in the shell, it will run the Bash script with sh and not Bash, since /bin/sh in FreeBSD is not Bash.

Is there a way I could run those Bash scripts, without modifying it? So no modification to the #!/bin/sh statement in the script file to point somewhere else?

I would like to run Bash scripts through Zsh, if possible. I don't want to install Bash, and Zsh can run Bash scripts...

by user1115057 at January 30, 2015 09:35 AM

Planet Theory

TR15-013 | On Hardness of Approximating the Parameterized Clique Problem | Subhash Khot, Igor Shinkar

In the $Gap-clique(k, \frac{k}{2})$ problem, the input is an $n$-vertex graph $G$, and the goal is to decide whether $G$ contains a clique of size $k$ or contains no clique of size $\frac{k}{2}$. It is an open question in the study of fixed parameterized tractability whether the $Gap-clique(k, \frac{k}{2})$ problem is fixed parameter tractable, i.e., whether it has an algorithm that runs in time $f(k)\cdot n^\alpha$, where $f(k)$ is an arbitrary function of the parameter $k$ and the exponent $\alpha$ is a constant independent of $k$. In this paper, we give some evidence that the $Gap-clique(k, \frac{k}{2})$ problem is not fixed parameter tractable. Specifically, we define a constraint satisfaction problem, which we call $Deg-2-sat$, where the input is a system of $k'$ quadratic equations in $k'$ variables over a finite field ${\mathbb F}$ of size $n'$, and the goal is to decide whether there is a solution in ${\mathbb F}$ that satisfies all the equations simultaneously. The main result in this paper is an ``FPT-reduction" from $Deg-2-sat$ to the $Gap-clique(k, \frac{k}{2})$ problem. If one were to hypothesize that the $Deg-2-sat$ problem is not fixed parameter tractable, then our reduction would imply that the $Gap-clique(k, \frac{k}{2})$ problem is not fixed parameter tractable either. The reduction relies on the algebraic techniques used in proof of the PCP theorem.

January 30, 2015 09:35 AM


Not One, But Two LibreSSL T-Shirts Available

As reported on misc@, there are two different styles of T-shirt available for pre-order:

Hi everyone,

Some new awesome LibreSSL T-shirts are available to help fund 
developments. You can see them on

We’re running a small pre-order for about 2 weeks. If you have any 
questions please email us off list.

Yes, these are official products with funds directed back to the project.


PS: Thank you to everyone who supported us over the transition period

You can support the project and look spiffy at the same time, either by choosing hope, or, like the Editors, an orderly evacuation of the flaming wreck.

January 30, 2015 09:32 AM


how do I get sbt to use a local maven proxy repository (Nexus)?

I've got an sbt (Scala) project that currently pulls artifacts from the web. We'd like to move towards a corporate-standardized Nexus repository that would cache artifacts. From the Nexus documentation, I understand how to do that for Maven projects. But sbt obviously uses a different approach. (I understand Ivy is involved somehow, but I've never used it and don't understand how it works.)

How do I tell sbt and/or the underlying Ivy to use the corporate Nexus repository system for all dependencies? I'd like the answer to use some sort of project-level configuration file, so that new clones of our source repository will automatically use the proxy. (I.e., mucking about with per-user config files in a dot-directory is not viable.)
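For sbt itself, a common project-level approach is to override the default resolvers in the build file (sbt resolves through Ivy under the hood, but the configuration lives in sbt). A sketch; the URL is a placeholder for your Nexus public group:

```scala
// build.sbt
// Using externalResolvers (rather than resolvers +=) REPLACES the
// default repositories, so every dependency is forced through the proxy.
externalResolvers := Seq(
  "corporate-nexus" at ""
)
```

Since this lives in build.sbt, fresh clones of the source repository pick it up automatically; no per-user dot-directory changes are needed.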


by Harlan at January 30, 2015 09:31 AM

Addition of two RDD[mllib.linalg.Vector]'s

I need to add two matrices that are stored in two files.

The contents of latest1.txt and latest2.txt look like:
1 2 3
4 5 6
7 8 9

I am reading those files like:

scala> val rows = sc.textFile("latest1.txt").map { line =>
    val values = line.split(' ').map(_.toDouble)
    Vectors.sparse(values.length, => (e._2, e._1)).filter(_._2 != 0.0))
  }

scala> val r1=rows
r1: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] = MappedRDD[2] at map at :14

scala> val rows = sc.textFile("latest2.txt").map { line =>
    val values = line.split(' ').map(_.toDouble)
    Vectors.sparse(values.length, => (e._2, e._1)).filter(_._2 != 0.0))
  }

scala> val r2=rows
r2: org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] = MappedRDD[2] at map at :14

I want to add r1 and r2. Is there any way to add these two RDD[mllib.linalg.Vector]s in Apache Spark?
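One route (a sketch, not tested on a cluster): zip the two RDDs row by row with, which requires both RDDs to have the same number of rows in the same order, then add the vectors element-wise after converting each mllib Vector via toArray. The element-wise combine step itself, shown here on plain arrays:

```scala
// Element-wise addition of two rows; on Spark this would run inside
// { case (v1, v2) => ... } after v1.toArray / v2.toArray,
// wrapping the result back up with Vectors.dense(...).
val v1 = Array(1.0, 2.0, 3.0)
val v2 = Array(4.0, 5.0, 6.0)

val sum = { case (a, b) => a + b }
assert(sum.toSeq == Seq(5.0, 7.0, 9.0))
```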

by krishna at January 30, 2015 09:29 AM

Cannot query SimpleDB using ::sdb/id using clojure rummage library

I am trying to query Amazon SimpleDB using the clojure rummage library. I am successfully able to query my database using fields other than the primary key (itemName) for instance the following query returns a list of records:

(db/query-event client '{select * from dev where (= :benchmark_id "abcd")})

However I seem to be unable to do the same thing for the primary id field, where the below query returns nil and I know that there are records in the database that match this query.

(db/query-event client '{select * from dev where (> ::sdb/id "1421284428631")})

by Michael Barton at January 30, 2015 09:24 AM


My First OpenBSD Port

Adam Wołk shares his experiences in porting the Otter web browser to OpenBSD:

[My first OpenBSD port] has just landed in the ports tree. It's been a fun ride, this post is a summary of the whole process from the perspective of a first time contributor. Note that this is not a tutorial, just my personal experiences of getting my first port accepted to the tree.

The article is a good overview of getting involved in the porting process; if you've ever been interested in how the process works, take a look!

January 30, 2015 09:16 AM


How to create a shortcut to existing annotation?

In my code I am using following annotation several times:

@JsonSerialize(using = classOf[CustomColorRGBASerializer])

To keep my code short and DRY, I would like to create a shortcut to this, something like:

class JsonSerializeARGB
  extends @JsonSerialize(using = classOf[CustomColorRGBASerializer])

which I could then use as a new @JsonSerializeARGB annotation

I can use annotations, but I do not know how to define them, so my attempt certainly looks naive and is obviously incorrect; I hope it carries the meaning through nonetheless.

I have read How do you define an @interface in Scala? and How to create annotations and get them in scala, but they did not help me much, as I do not want to create a brand new annotation, rather "subclass" existing annotation. Can this be done?

If there is no Scala solution, can something like this be done in Java? (The Jackson annotations I am working with are defined in Java anyway).

by Suma at January 30, 2015 08:59 AM

Implicit conversion reversing a tuple

When working with scala.swing.BorderPanel, one can add components as this:

layout += new ToolBar {
  // toolbar layout here
} -> South

I would prefer a reversed order, like this:

layout += South -> new ToolBar {
  // toolbar layout here

I tried to define an implicit conversion swapping the pair:

implicit def layoutPair(p:(Component,Constraints)):(Constraints,Component) = p.swap

The compiler does not recognize it, and still shows me a type mismatch:

found: (Position.Value,AwtComponent), required: (AwtComponent, Position.Value).

Can an implicit reversal like this be achieved? If not, is there some alternative which would not add a closing brace or parenthesis after the ToolBar definition?
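An implicit conversion is applied from the type of the expression you wrote to the type the method expects, so here the conversion must go (Constraints, Component) => (Component, Constraints), not the other way around as in the attempt above. A plain-Scala demonstration with stand-in types:

```scala
import scala.language.implicitConversions

// Stand-ins for scala.swing's Component and Constraints:
case class Component(name: String)
case class Constraints(pos: String)

// The method expects (Component, Constraints)...
def add(p: (Component, Constraints)): String = s"${} at ${p._2.pos}"

// ...so the conversion must accept what we WRITE and produce what is EXPECTED:
implicit def swapPair(p: (Constraints, Component)): (Component, Constraints) =

// The reversed order now compiles and is converted on the way in:
assert(add((Constraints("South"), Component("toolbar"))) == "toolbar at South")
```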

by Suma at January 30, 2015 08:57 AM

Ansible idempotent MySQL installation Playbook

I want to setup a MySQL server on AWS, using Ansible for the configuration management. I am using the default AMI from Amazon (ami-3275ee5b), which uses yum for package management.

When the Playbook below is executed, all goes well. But when I run it for a second time, the task Configure the root credentials fails, because the old password of MySQL doesn't match anymore, since it has been updated the last time I ran this Playbook.

This makes the Playbook non-idempotent, which I don't like. I want to be able to run the Playbook as many times as I want.

- hosts: staging_mysql
  user: ec2-user
  sudo: yes

  tasks:
    - name: Install MySQL
      action: yum name=$item
      with_items:
        - MySQL-python
        - mysql
        - mysql-server

    - name: Start the MySQL service
      action: service name=mysqld state=started

    - name: Configure the root credentials
      action: command mysqladmin -u root -p $mysql_root_password

What would be the best way to solve this, which means make the Playbook idempotent? Thanks in advance!
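One common fix is to replace the mysqladmin shell-out with the mysql_user module, which is idempotent. A sketch (variable names taken from the playbook above; check_implicit_admin falls back to a passwordless root login, which covers the very first run):

```yaml
- name: Configure the root credentials
  mysql_user:
    name: root
    password: "{{ mysql_root_password }}"
    login_user: root
    login_password: "{{ mysql_root_password }}"
    check_implicit_admin: yes
```

Re-running the playbook then succeeds: the first run sets the password via the implicit passwordless login, and later runs authenticate with the already-set password and report no change.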

by Voles at January 30, 2015 08:20 AM


Mutations as a crossover by-product

Let's say I'm writing a GA to find an optimal path to travel from point $A$ to point $B$. Genotypes are a list of directions (north, south, east, west) to follow.

So a genotype "NENWEE" will move north once, east once, then north again, west once, and finally east twice.

The directions are encoded as follows:

N : 00
E : 01
W : 10
S : 11

Our first genotype, "NENWEE" (let's call it $P$), will thus be encoded as follows: $00\,01\,00\,10\,01\,01$

Let $Q$ be a second genotype, say "EENEWW", which is encoded as follows: $01\,01\,00\,01\,10\,10$

Now let's do a one-point crossover operation on genotype $Q$, from $P$. The randomly-chosen crossover point is between the $9$-th and $10$-th bit, so bits $10$, $11$ and $12$ from genotype $P$ will replace those same bits from genotype $Q$. Let's call the resulting genotype $Q'$.

P  :  00 01 00 10 01 01
Q  :  01 01 00 01 10 10
Q' :  01 01 00 01 11 01

After decoding $Q'$ we find that the result is "EENESE". However neither $P$ nor $Q$ contained direction south.
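The bit-level operation above can be reproduced in a few lines (a sketch, with genotypes as bit strings):

```scala
// Two-bit encoding of the four directions:
val decode = Map("00" -> 'N', "01" -> 'E', "10" -> 'W', "11" -> 'S')

val p = "000100100101" // NENWEE
val q = "010100011010" // EENEWW

// One-point crossover after bit 9: Q's first 9 bits + P's last 3 bits.
val qPrime = q.take(9) + p.drop(9)
assert(qPrime == "010100011101")

val decoded = qPrime.grouped(2).map(decode).mkString
assert(decoded == "EENESE") // an 'S' appears although neither parent had one
```

Note that the crossover point falls inside a two-bit codon, which is what lets a direction appear that neither parent contained.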

My question is, do crossover operators imply a certain degree of mutation by definition?

by Philippe Olivier at January 30, 2015 08:16 AM

How to show that L = L(G)?

Specifying formal languages by giving formal grammars is a frequent task: we need grammars not only to describe languages, but also to parse them, or even do proper science. In all cases, it is important that the grammar at hand is correct, that is generates exactly the desired words.

We can often argue on a high-level why the grammar is an adequate representation of the desired language, omitting a formal proof. But what if we are in doubt or need a formal proof for some reason? What are techniques we can apply?

This is supposed to become a reference question. Therefore, please take care to give general, didactically presented answers that are illustrated by at least one example but nonetheless cover many situations. Thanks!

by Raphael at January 30, 2015 08:14 AM



ScalaFx Bind ImageView to custom Enumeration

I just started to write scalafx application and have a question about bindings.

I have an Enumeration with the connection status in my presenter class, and I want to select the appropriate icon for a label in the view class. I can basically create the binding the JavaFX way and set a converter which selects the appropriate ImageView every time the status changes, but is it possible to do this the ScalaFX way?

I have looked at many ScalaFX examples, but still can't find anything like this.

Here is some code:

package view


class MainWindowPresenter {
  object DatabaseState extends Enumeration {
    type DatabaseState = Value
    val CONNECTED, ERROR = Value // ERROR is referenced below; the full value list was elided
  }

  val login = new ObjectProperty[String](this, "login", "awesomeloginname")
  val state = new ObjectProperty[DatabaseState.DatabaseState](this, "state", DatabaseState.ERROR)
}

View class:

package view

import java.util.concurrent.Callable
import javafx.beans.binding.ObjectBinding

import collection.immutable.HashMap

import javafx.scene.control.SeparatorMenuItem

import scala.util.Random
import scalafx.geometry.{Orientation, Insets, Pos}
import scalafx.scene.control._
import scalafx.scene.image.{ImageView, Image}
import scalafx.scene.layout._

import scalafx.Includes._

class MainWindowView extends BorderPane {
  val model = new MainWindowPresenter

  top = new HBox {
    content = List(
      new Label() {
       graphic <== //somehow select imageview depending on model.state

  private def imageFromResource(name : String) =
    new ImageView(new Image(getClass.getClassLoader.getResourceAsStream(name)))

Thanks in advance, and sorry for any grammar mistakes - English isn't my native language.

by rtgbnm at January 30, 2015 07:58 AM


IB API grayed out for paper trading account

I played with the IB Trader Workstation API using their demo account, and it went well. Next, I moved to a paper trading account. Now, my program using the API cannot connect to IB.

In order to enable the API, the documentation in Trader Workstation API Settings says:

    In TWS, select the Edit menu, then select Global Configuration.
    Click API in the left pane, and select Settings.
    Configure the API settings as required. These are described below.

However, the API item in the left pane is grayed out, so I cannot change the settings. Can you please tell me what I am doing wrong?

by Yuval F at January 30, 2015 07:52 AM


Akka actor cannot send back the message

say, I have an Actor whose receive function likes

def receive = {
  case Message =>
    val doFuture: Future[String] = doSomething()

    doFuture onSuccess {
      case doResult =>
        //////////// Here is the problem !! /////////////
        // --> here it fails. It seems sender cannot send the result back to the caller
        sender ! doResult
    }

    doFuture onFailure {
      // handle exception
    }
}

Why can't the sender send the message back any more?
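A likely culprit: inside an actor, sender() is a method that reads the context of the message currently being processed, and by the time the future's callback runs, the actor may have moved on to another message. The usual fix is to capture val replyTo = sender() before going asynchronous and reply to replyTo (or use pipeTo). A contrived plain-Scala sketch (not Akka; the names are hypothetical) of why capturing matters:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// A mutable field stands in for the actor's "current message" context.
var currentSender: String = "caller-A"
def sender(): String = currentSender

val replyTo = sender() // capture BEFORE going asynchronous
val done = Promise[(String, String)]()

Future {
  currentSender = "caller-B"          // the actor moved on to the next message
  done.success((sender(), replyTo))   // stale lookup vs. captured value
}

val (stale, captured) = Await.result(done.future, 5.seconds)
assert(stale == "caller-B")    // sender() now points at the wrong caller
assert(captured == "caller-A") // the captured reference still replies correctly
```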

by hliu at January 30, 2015 07:47 AM

Pattern match abstact type

trait Aggregate {
    type Command
}

class AggregateHandler(a: Aggregate) {
   def receiveCommand: Receive = {
     case _: a.Command => ???
   }
}

How can I pattern match on a.Command? I am getting: abstract type pattern AggregateHandler.this.a.Command is unchecked since it is eliminated by erasure, and: The outer reference in this type test cannot be checked at run time.

How can I work around this?
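One workaround is to have each Aggregate carry a ClassTag for its concrete Command type: the type argument is erased, but the runtime class is recoverable and a ClassTag can be used directly as an extractor in a pattern. A sketch (a plain method stands in for Akka's Receive):

```scala
import scala.reflect.ClassTag

trait Aggregate {
  type Command
  def commandTag: ClassTag[Command] // each concrete aggregate supplies its tag
}

class AggregateHandler(a: Aggregate) {
  private val tag = a.commandTag // a stable identifier, usable as an extractor

  def receiveCommand(msg: Any): String = msg match {
    case tag(cmd) => s"command: $cmd"   // runtime class check despite erasure
    case other    => s"unhandled: $other"
  }
}

val strings = new Aggregate {
  type Command = String
  val commandTag = implicitly[ClassTag[String]]
}

val handler = new AggregateHandler(strings)
assert(handler.receiveCommand("save") == "command: save")
assert(handler.receiveCommand(42) == "unhandled: 42")
```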

by mark_dj at January 30, 2015 07:32 AM

Golang zmq binding, ZMQ4, returns package error not finding file zmq.h

I am trying to include ZMQ sockets in a Go app but both zmq4 and gozmq (the referred ZMQ binding libraries for Go) are giving me problems. I would like to understand why zmq4 specifically isn't importable on my system.

I am running a Windows 8 system and I used the windows installer from the ZMQ website for version 4.0.3. I am primarily concerned about getting zmq4 set up and here is the result of my "go get" query on the github library's location:

> go get
polling.go:4:17: fatal error: zmq.h: No such file or directory
compilation terminated.

This issue is not alleviated by cloning the Github repository - the error remains the same.

I know the issue has to do with the C library zmq.h that is located in the "include" folder of my ZMQ installation, but whether the dependency is held up by a pathing issue or an external tool issue is a mystery to me.

A similar error has come up in regards to node.js and is the solution I see others referred to, outside of node scripting, but it was unsuccessful in my case.

I've so far included the path to the "include" folder in my PATH environment variable, and previously placed zmq.h inside the zmq4 top-level folder. I don't otherwise have much of an arsenal for understanding this problem, because I am new to C and to C-importing packages in Go.

by user2628946 at January 30, 2015 07:27 AM

Is there any Ansible remote client for control machine?

Ansible, unlike Chef and Puppet, uses an agentless model. I would like to know whether there is any Ansible remote client, so that we can connect to a fleet of Ansible control machines and execute playbooks on their respective targets. I am looking for a command line client similar to the following:

ansible-execute hostname_of_control_machine username_of_control_machine password_of_control_machine inventory_file playbook_name

Any suggestions?

by samvaran kashyap at January 30, 2015 07:01 AM



Which sports are generally the best for trading on betting exchanges for a profit?

I am looking at trading bets on tennis, football and horse racing in particular as these appear to have the most liquidity.

How much background research and how much trial and error are generally needed to trade confidently, with some knowledge of how the prices will move? Also, how big of a bankroll do you suggest starting with?

The exchanges I am currently looking into are and, however I have read that Betfair has a 60% premium charge for long-term winning accounts; does this generally affect traders dealing with multiple trades daily? I am hoping to get to a level where I make a consistent ~£100-200 daily profit to supplement my income, and continue to work my way up from there.

Any advice is truly appreciated, thank you in advance!

by Andrew WU at January 30, 2015 06:47 AM


pkgsrc package install message

When installing gnupg2 on Mac OS X 10.10.2 using pkgsrc, I got this message:

$ pkgin install gnupg2

The following files should be created for dirmngr-1.1.0nb8:

    /etc/rc.d/dirmngr (m=0755)



What exactly do I need to do here? Copy /usr/pkg/share/examples/rc.d/dirmngr to /etc/rc.d/dirmngr and chmod it to 0755?

by qazwsx at January 30, 2015 06:43 AM




Ansible run remote exe file

How do I execute an .exe file on a remote Windows machine with Ansible? I have used the raw, script, and command modules, but the .exe runs in a different session, so I cannot see the application UI on the desktop of the remote machine.

The second issue is that the Ansible playbook doesn't move forward after the .exe starts executing.

Can we run the exe in the active desktop session?

by Anuradha Fernando at January 30, 2015 05:59 AM





zmq_recv failure on REP

The following code belongs to the zmq requester.

void *context = zmq_ctx_new();
void *requester = zmq_socket(context, ZMQ_REQ);
zmq_connect(requester, "tcp://localhost:6668");
char buffer[6] = "";

This one is the zmq replier:

void *context = zmq_ctx_new();
void *responder = zmq_socket(context, ZMQ_REP);
int rc = zmq_bind(responder, "tcp://*:6668");
assert(rc == 0);
char buf[4] = "";
if (zmq_recv(responder, buf, 3, ZMQ_NOBLOCK) != 0) {
    cout << "error - " << zmq_errno() << endl; // recv fails here

Whenever I execute the above code, zmq_recv() on the ZMQ_REP socket fails. The way I call zmq_recv() was working fine in other code; only with ZMQ_REP does it fail. Can anyone tell me where I am going wrong?

by Kumar at January 30, 2015 04:56 AM


Pricing an american style option on a bond future

What is a good way to price an American option on a bond future? From the book Fixed Income Securities (3rd edition) by Tuckman, I understand how to price a European option on a bond future, but I still have no clue how to price an American option.
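One textbook route, consistent with how European options are handled, is backward induction on a tree with an early-exercise check at every node. The sketch below prices an American put on a lognormal underlying with a Cox-Ross-Rubinstein tree; it only shows the early-exercise mechanics, since a bond future would need its own dynamics (e.g. a short-rate lattice) rather than constant-volatility geometric Brownian motion:

```python
import math

def american_put_binomial(S0, K, r, sigma, T, steps=200):
    """Price an American put with a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs; index j = number of up moves
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    # backward induction with an early-exercise check at each node
    for i in range(steps - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),  # continuation
                K - S0 * u**j * d**(i - j))                        # immediate exercise
            for j in range(i + 1)
        ]
    return values[0]
```

The same backward induction carries over to a calibrated short-rate lattice once the future's price is computed at each node.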

by galaxyan at January 30, 2015 04:32 AM


That Sly MINIX | BSD Now 74

Allan Jude of BSDNow interviews Andrew Tanenbaum about the history and current state of MINIX. The interview begins at 21:15.


by jeremy at January 30, 2015 04:14 AM


Exercise on Scheduling (need a confirmation on the solution) [on hold]

I have been assigned to do the following exercise for my real time systems course:

Consider the set of jobs in the image. Suppose that the jobs have identical execution time.

$a)$ What maximum execution time can the jobs have and still can be feasibly scheduled on one processor? Explain your answer.

$b)$ Suppose that the release times of $J_1$ and $J_2$ are jittery. The release time of $J_1$ can be as early as $0$ and as late as $3$, and the release time of $J_2$ can be as late as $1$. How can you take into account this variation when you want to determine whether the jobs can all meet their deadlines?


Note: The graph is a precedence graph (for example, $J_3$ starts only after $J_1$ and $J_2$) and the values represent (Arrival Time, Deadline).


$a)$ Working directly with a precedence graph is awkward, so I have calculated the effective release time $(ERT)$ and effective deadline $(EDL)$ of each job, defined as follows:

$ERT = \mathsf{max}\{$job's release time, $ERT$ of all its predecessors$\}$

$EDL = \mathsf{min}\{$job's given deadline, $EDL$ of all its successors$\}$

The values are: $$J_1(2,8), J_2(0,7), J_3(2,8), J_4(4,9), J_5(2,8), J_6(4,20), J_7(6,21)$$

So supposing all the jobs have the same execution time.

I can arrange $J_2$ on $[0,2]$ under the supposition that $t \leq 2$.

Now during the period $[2,9]$ I have to arrange $J_1,J_3,J_4,J_5$, and so $$t \leq \frac{9-2}{4} = 1.75$$ (the binding constraint).

For the remaining jobs during $[9,21]$ there is a lot of time, and so $$t\leq \frac{21-9}{2} = 6$$

Putting all constraints together $\rightarrow$ $t \leq 1.75$, so $1.75$ is the maximum execution time.

$b)$ If $J_1$'s release time can be as late as $3$ and $J_2$'s release time can be as late as $1$, I can recalculate the $ERT$ of all jobs using the maximum arrival times of $J_1$ and $J_2$ as their release times.

So new values are: $J_1(3,8), J_2(1,7), J_3(2,8), J_4(4,9), J_5(3,8), J_6(4,20), J_7(6,21)$

I can arrange $J_2$ during $[1,3]$, supposing $t \leq 2$.

I can arrange $J_1,J_3,J_4,J_5$ during $[3,9]$, so $$t\leq\frac{9-3}{4}=1.5$$

I can arrange the remaining jobs during $[9,21]$, so $$t\leq\frac{21-9}{2}=6$$

Putting all constraints together $\rightarrow$ All jobs can be definitely scheduled if $t\leq 1.5$.

I would like to know if my solution is correct. Thanks all.

by Frastolo at January 30, 2015 04:13 AM

$k$-query oracle Turing machine (Sipser 9.21)

Question: A $k$-query oracle Turing machine is an oracle Turing machine that is permitted to make at most $k$ queries on each input. A $k$-query oracle Turing machine $M$ with an oracle for $A$ is written $M^{A,k}$. Define $P^{A,k}$ to be the collection of languages that are decidable by polynomial time $k$-query oracle Turing machines with an oracle for $A$.

a) Show that $NP \cup coNP \subseteq P^{SAT,1}$.

b) Assume that $NP \neq coNP$. Show that $NP \cup coNP \subsetneq P^{SAT,1}$.

Here, a) is trivial. But I'm stuck on b). Thank you for any help!

by Qinxuan at January 30, 2015 04:09 AM


I Wrote a (Somewhat) Useful Elisp Macro!

I'm very new to lisp (I've been doing some Clojure for a few days) and relatively new to Emacs (nearly a year).

I came from Vim and I got sick of having to define evil-mode keys to call the same functions in different modes. For instance: since I'm a fan of the scratch buffer, I want all my regular emacs-lisp-mode keybindings in that buffer too, but the programmer inside me recoiled at the thought of duplicating all the keybindings for both modes (suppose I want to change one!). What I really wanted was something to work like:

(evil-define-multiple
 '(lisp-interaction-mode-map emacs-lisp-mode-map)
 '((normal (kbd "C-c C-a") success)
   (insert (kbd "C-c C-a") success)))

"I can make that, how hard could it be?" I said to myself 4 hours ago. And I finally did!

(defmacro make-mapping (mode-map bindings)
  `(evil-define-key (quote ,(car bindings))              ;; evil state
                    ,mode-map                            ;; mode map
                    ,(car (cdr bindings))                ;; keybinding
                    (quote ,(car (cdr (cdr bindings)))))) ;; function call

(defun evil-define-multiple (mode-maps bindings)
  (mapcar (lambda (mode)
            (mapcar (lambda (binding)
                      (eval `(make-mapping ,mode ,binding)))
                    bindings))
          mode-maps))

(defun success ()
  "a testing function"
  (interactive)
  (message "success"))

This is the somewhat obscenely small amount of code that gave me just what I wanted.

I'd love to have some feedback and suggestions for improvements. This is the first emacs lisp I've written that wasn't basic defining new keys or adding functions to hooks.

submitted by the_whalerus

January 30, 2015 04:04 AM



== for case class and "non-case" class in Scala

I am studying Scala and ran into the following puzzle.

I can define the following case classes:

abstract class Expr
case class Number(n: Int) extends Expr

When I create two instances from the class Number and compare them

val x1 = Number(1)
val x2 = Number(1)
x1 == x2

I have the following result:

x1: Number = Number(1)

x2: Number = Number(1)

res0: Boolean = true

So x1 and x2 are the same.

However, if I drop the case modifier in the Number class definition, i.e.

abstract class Expr
class Number(n: Int) extends Expr

and then compare two instances from the Number class in the same way

val x1 = new Number(1)
val x2 = new Number(1)
x1 == x2

I have the following output:

x1: Number = Number@1175e2db

x2: Number = Number@61064425

res0: Boolean = false

It says that this time x1 and x2 are different.

Could you tell me why this is? What difference does case make when comparing two instances?
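For comparison: the case modifier makes the Scala compiler generate a structural equals (along with hashCode, toString, apply, and more), whereas a plain class inherits reference equality from AnyRef. Python's @dataclass decorator performs the analogous code generation, which makes the contrast easy to demonstrate (a cross-language analogy, not Scala itself):

```python
from dataclasses import dataclass

class PlainNumber:
    """Like a plain Scala class: == falls back to object identity."""
    def __init__(self, n):
        self.n = n

@dataclass
class CaseNumber:
    """Like a Scala case class: a field-by-field __eq__ is generated."""
    n: int

print(PlainNumber(1) == PlainNumber(1))  # False: two distinct objects
print(CaseNumber(1) == CaseNumber(1))    # True: same field values
```

In both languages the "case"/"dataclass" variant compares the field values, while the plain variant compares object identities.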

Thanks, Pan

by Pan Chao at January 30, 2015 03:53 AM


Wes Felter


ArityException with arity of (-1)

There is a macro, and I've been trying to figure out this error: ArityException Wrong number of args (-1) passed to: user/bar clojure.lang.Compiler.macroexpand1. I tried to debug it, but I'm not understanding why there's an arity of (-1) when I try to expand the macro.

I'm running the following code.

(defn foo [x] (println x))

(defmacro bar [exp]
  (let [length (count exp)]
    (cond
      (= length 0) '()
      (= length 1) exp
      :else (let [[head & tail] (vec exp)
                  [new-tail] (bar tail)]
              `(trap (~head ~@new-tail))))))

(macroexpand '(bar (inc 1)))

Anyone have any idea as to what's going on with the arity of (-1)?

by Petesta at January 30, 2015 01:44 AM


Planet Theory

Finding Connected Dense $k$-Subgraphs

Authors: Xujin Chen, Xiaodong Hu, Changjun Wang
Download: PDF
Abstract: Given a connected graph $G$ on $n$ vertices and a positive integer $k\le n$, a subgraph of $G$ on $k$ vertices is called a $k$-subgraph in $G$. We design combinatorial approximation algorithms for finding a connected $k$-subgraph in $G$ such that its density is at least a factor $\Omega(\max\{n^{-2/5},k^2/n^2\})$ of the density of the densest $k$-subgraph in $G$ (which is not necessarily connected). These particularly provide the first non-trivial approximations for the densest connected $k$-subgraph problem on general graphs.

January 30, 2015 01:40 AM

Counting Homomorphisms to Square-Free Graphs, Modulo 2

Authors: Andreas Göbel, Leslie Ann Goldberg, David Richerby
Download: PDF
Abstract: We study the problem HomsTo$H$ of counting, modulo 2, the homomorphisms from an input graph to a fixed undirected graph $H$. A characteristic feature of modular counting is that cancellations make wider classes of instances tractable than is the case for exact (non-modular) counting, so subtle dichotomy theorems can arise. We show the following dichotomy: for any $H$ that contains no 4-cycles, HomsTo$H$ is either in polynomial time or is $\oplus P$-complete. This confirms a conjecture of Faben and Jerrum that was previously only known to hold for trees and for a restricted class of treewidth-2 graphs called cactus graphs. We confirm the conjecture for a rich class of graphs including graphs of unbounded treewidth. In particular, we focus on square-free graphs, which are graphs without 4-cycles. These graphs arise frequently in combinatorics, for example in connection with the strong perfect graph theorem and in certain graph algorithms. Previous dichotomy theorems required the graph to be tree-like so that tree-like decompositions could be exploited in the proof. We prove the conjecture for a much richer class of graphs by adopting a much more general approach.

January 30, 2015 01:40 AM


Interview at The Setup: The Stuff I Use

it's-a me

I was interviewed by the site The Setup! It was pretty fun. The premise of the site is asking people “what do you use?” Hardware, software, processes, etc.

I’ll also list mental processes under “software”. All the apps and tricks in the world are just cargo cult trappings if you can’t control the way you think. This is really hard for me! And it’s an ongoing learning process as I struggle to navigate the canyons that streams of habit have carved into the workings of my mind. But the few things I’ve found that work really well (when I can stick to them) are…

Read the whole interview at The Setup!

by David Malki at January 30, 2015 01:35 AM


Replicating portfolio and risk-neutral pricing for interest rate options

For equity options, the pricing of options depends on the existence of a replicating portfolio, so you can price the option as the constituents of that replicating portfolio. However, I am not seeing how the same analysis can be applied to value interest rate options. Does the concept of replication apply to interest rate derivatives? If so, what would a replicating portfolio look like?

by ezbentley at January 30, 2015 01:30 AM

arXiv Networking and Internet Architecture

Analytical Model for IEEE 802.15.4 Multi-Hop Networks with Improved Handling of Acknowledgements and Retransmissions. (arXiv:1501.07594v1 [cs.NI])

The IEEE 802.15.4 standard allows for the deployment of cost-effective and energy-efficient multi-hop networks. This document features an in-depth presentation of an analytical model for assessing the performance of such networks. It considers a generic, static topology with Poisson distributed data-collection as well as data-dissemination traffic. The unslotted CSMA/CA MAC layer of IEEE 802.15.4 is closely modeled as well as an enhanced model of the neighborhood allows for consideration of collisions of packets including interferences with acknowledgements. The hidden node problem is taken into account as well as a formerly disregarded effect of repeated collisions of retransmissions. The model has been shown to be suitable to estimate the capacity of large-scale multi-hop networks.

by Florian Meier, Volker Turau at January 30, 2015 01:30 AM

FAIR: Forwarding Accountability for Internet Reputability. (arXiv:1501.07586v1 [cs.NI])

This paper presents FAIR, a forwarding accountability mechanism that incentivizes ISPs to apply stricter security policies to their customers. The Autonomous System (AS) of the receiver specifies a traffic profile that the sender AS should adhere to. Intermediate ASes on the path mark packets. In case of traffic profile violations, the marked packets are used to prove misbehavior. FAIR introduces minimal bandwidth overhead and requires no per-packet and no per-flow state at border routers. We describe integration with IPv4/IPv6 and demonstrate a software switch running on commodity hardware that can switch at a line rate of 120 Gbps, and can forward 140M minimum-sized packets per second, limited by the hardware I/O subsystem. Furthermore, FAIR provides incentives for incremental adoption to edge ASes and ISPs. Moreover, this paper proposes a "suspicious bit" for packet headers - an application that builds on top of FAIR's proofs of misbehavior and flags packets to warn other entities in the network.

by Christos Pappas, Raphael M. Reischuk, Adrian Perrig at January 30, 2015 01:30 AM

Quantum Information splitting using a pair of {\it GHZ} states. (arXiv:1501.07529v1 [quant-ph])

We describe a protocol for quantum information splitting (QIS) of a restricted class of three-qubit states among three parties Alice, Bob and Charlie, using a pair of GHZ states as the quantum channel. There are two different forms of this three-qubit state that is used for QIS depending on the distribution of the particles among the three parties. There is also a special type of four-qubit state that can be used for QIS using the above channel. We explicitly construct the quantum channel, Alice's measurement basis and the analytic form of the unitary operations required by the receiver for such a purpose.

by Kaushik Nandi, Goutam Paul at January 30, 2015 01:30 AM

Throughput of a Cognitive Radio Network under Congestion Constraints: A Network-Level Study. (arXiv:1501.07510v1 [cs.NI])

In this paper we analyze a cognitive radio network with one primary and one secondary transmitter, in which the primary transmitter has bursty arrivals while the secondary node is assumed to be saturated (i.e. always has a packet waiting to be transmitted). The secondary node transmits in a cognitive way such that it does not impede the performance of the primary node. We assume that the receivers have multipacket reception (MPR) capabilities and that the secondary node can take advantage of the MPR capability by transmitting simultaneously with the primary under certain conditions. We obtain analytical expressions for the stationary distribution of the primary node queue and we also provide conditions for its stability. Finally, we provide expressions for the aggregate throughput of the network as well as for the throughput at the secondary node.

by Nikolaos Pappas, Marios Kountouris at January 30, 2015 01:30 AM

On-line list colouring of random graphs. (arXiv:1501.07469v1 [math.CO])

In this paper, the on-line list colouring of binomial random graphs G(n,p) is studied. We show that the on-line choice number of G(n,p) is asymptotically almost surely asymptotic to the chromatic number of G(n,p), provided that the average degree d=p(n-1) tends to infinity faster than (log log n)^1/3(log n)^2n^(2/3). For sparser graphs, we are slightly less successful; we show that if d>(log n)^(2+epsilon) for some epsilon>0, then the on-line choice number is larger than the chromatic number by at most a multiplicative factor of C, where C in [2,4], depending on the range of d. Also, for d=O(1), the on-line choice number is by at most a multiplicative constant factor larger than the chromatic number.

by Alan Frieze, Dieter Mitsche, Xavier Pérez-Giménez, Paweł Prałat at January 30, 2015 01:30 AM

Abelian bordered factors and periodicity. (arXiv:1501.07464v1 [cs.FL])

A finite word u is said to be bordered if u has a proper prefix which is also a suffix of u, and unbordered otherwise. Ehrenfeucht and Silberger proved that an infinite word is purely periodic if and only if it contains only finitely many unbordered factors. We are interested in abelian and weak abelian analogues of this result; namely, we investigate the following question(s): Let w be an infinite word such that all sufficiently long factors are (weakly) abelian bordered; is w (weakly) abelian periodic? In the process we answer a question of Avgustinovich et al. concerning the abelian critical factorization theorem.

by Emilie Charlier, Tero Harju, Svetlana Puzynina, Luca Zamboni at January 30, 2015 01:30 AM

Simple greedy 2-approximation algorithm for the maximum genus of a graph. (arXiv:1501.07460v1 [math.CO])

The maximum genus $\gamma_M(G)$ of a graph G is the largest genus of an orientable surface into which G has a cellular embedding. Combinatorially, it coincides with the maximum number of disjoint pairs of adjacent edges of G whose removal results in a connected spanning subgraph of G. In this paper we prove that removing pairs of adjacent edges from G arbitrarily while retaining connectedness leads to at least $\gamma_M(G)/2$ pairs of edges removed. This allows us to describe a greedy algorithm for the maximum genus of a graph; our algorithm returns an integer k such that $\gamma_M(G)/2\le k \le \gamma_M(G)$, providing a simple method to efficiently approximate maximum genus. As a consequence of our approach we obtain a 2-approximate counterpart of Xuong's combinatorial characterisation of maximum genus.
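The greedy procedure described above is simple enough to prototype directly: keep removing pairs of adjacent edges as long as the remaining spanning subgraph stays connected, and count the removed pairs. A small illustrative Python sketch (not the authors' code; it uses a naive DFS connectivity test, so it is far from efficient):

```python
from itertools import combinations

def connected(vertices, edges):
    """DFS connectivity check on an undirected edge list."""
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == set(vertices)

def greedy_genus_lower_bound(vertices, edges):
    """Greedily remove pairs of adjacent edges while the rest stays connected.

    Returns the number of removed pairs, which by the paper's result is
    at least gamma_M(G)/2."""
    edges = set(frozenset(e) for e in edges)
    pairs = 0
    progress = True
    while progress:
        progress = False
        for e, f in combinations(edges, 2):
            # e & f nonempty means the two edges share a vertex (are adjacent)
            if e & f and connected(vertices, [tuple(x) for x in edges - {e, f}]):
                edges -= {e, f}
                pairs += 1
                progress = True
                break
    return pairs
```

On $K_4$, for instance, the greedy count is $1$, which here equals $\gamma_M(K_4)$; on a 4-cycle it is $0$, matching $\gamma_M(C_4)=0$.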

by Michal Kotrbcik, Martin Skoviera at January 30, 2015 01:30 AM

Convergence law for hyper-graphs with prescribed degree sequences. (arXiv:1501.07429v1 [cs.LO])

We view hyper-graphs as incidence graphs, i.e. bipartite graphs with a set of nodes representing vertices and a set of nodes representing hyper-edges, with two nodes being adjacent if the corresponding vertex belongs to the corresponding hyper-edge. It defines a random hyper-multigraph specified by two distributions, one for the degrees of the vertices, and one for the sizes of the hyper-edges. We develop the logical analysis of this framework and first prove a convergence law for first-order logic, then characterise the limit first-order theories defined by a wide class of degree distributions. Convergence laws of other models follow, and in particular for the classical Erd\H{o}s-R\'enyi graphs and $k$-uniform hyper-graphs.

by Nans Lefebvre at January 30, 2015 01:30 AM

Liquidity costs: a new numerical methodology and an empirical study. (arXiv:1501.07404v1 [q-fin.CP])

We consider rate swaps which pay a fixed rate against a floating rate in presence of bid-ask spread costs. Even for simple models of bid-ask spread costs, there is no explicit strategy optimizing an expected function of the hedging error. We here propose an efficient algorithm based on the stochastic gradient method to compute an approximate optimal strategy without solving a stochastic control problem. We validate our algorithm by numerical experiments. We also develop several variants of the algorithm and discuss their performances in terms of the numerical parameters and the liquidity cost.

by Christophe Michel, Victor Reutenauer, Denis Talay, Etienne Tanré at January 30, 2015 01:30 AM

Valuation Algorithms for Structural Models of Financial Interconnectedness. (arXiv:1501.07402v1 [q-fin.CP])

Much research in systemic risk is focused on default contagion. While this demands an understanding of valuation, fewer articles specifically deal with the existence, the uniqueness, and the computation of equilibrium prices in structural models of interconnected financial systems. However, beyond contagion research, these topics are also essential for risk-neutral pricing. In this article, we therefore study and compare valuation algorithms in the standard model of debt and equity cross-ownership which has crystallized in the work of several authors over the past one and a half decades. Since known algorithms have potentially infinite runtime, we develop a class of new algorithms, which find exact solutions in finitely many calculation steps. A simulation study for a range of financial system designs allows us to derive conclusions about the efficiency of different numerical methods under different system parameters.

by Johannes Hain, Tom Fischer at January 30, 2015 01:30 AM

Resilience for Exascale Enabled Multigrid Methods. (arXiv:1501.07400v1 [cs.CE])

With the increasing number of components and further miniaturization the mean time between faults in supercomputers will decrease. System level fault tolerance techniques are expensive and cost energy, since they are often based on redundancy. Also classical check-point-restart techniques reach their limits when the time for storing the system state to backup memory becomes excessive. Therefore, algorithm-based fault tolerance mechanisms can become an attractive alternative. This article investigates the solution process for elliptic partial differential equations that are discretized by finite elements. Faults that occur in the parallel geometric multigrid solver are studied in various model scenarios. In a standard domain partitioning approach, the impact of a failure of a core or a node will affect one or several subdomains. Different strategies are developed to compensate the effect of such a failure algorithmically. The recovery is achieved by solving a local subproblem with Dirichlet boundary conditions using local multigrid cycling algorithms. Additionally, we propose a superman strategy where extra compute power is employed to minimize the time of the recovery process.

by Markus Huber, Björn Gmeiner, Ulrich Rüde, Barbara Wohlmuth at January 30, 2015 01:30 AM

Coordination Games on Graphs. (arXiv:1501.07388v1 [cs.GT])

We introduce natural strategic games on graphs, which capture the idea of coordination in a local setting. We show that these games have an exact potential and have strong equilibria when the graph is a pseudoforest. We also exhibit some other classes of graphs for which a strong equilibrium exists. However, in general strong equilibria do not need to exist. Further, we study the (strong) price of stability and anarchy. Finally, we consider the problems of computing strong equilibria and of determining whether a joint strategy is a strong equilibrium.

by Krzysztof R. Apt, Mona Rahn, Guido Schaefer, Sunil Simon at January 30, 2015 01:30 AM

Hardness of Virtual Network Embedding with Replica Selection. (arXiv:1501.07379v1 [cs.DC])

Efficient embedding virtual clusters in physical network is a challenging problem. In this paper we consider a scenario where physical network has a structure of a balanced tree. This assumption is justified by many real-world implementations of datacenters. We consider an extension to virtual cluster embedding by introducing replication among data chunks. In many real-world applications, data is stored in distributed and redundant way. This assumption introduces additional hardness in deciding what replica to process. By reduction from classical NP-complete problem of Boolean Satisfiability, we show limits of optimality of embedding. Our result holds even in trees of edge height bounded by three. Also, we show that limiting replication factor to two replicas per chunk type does not make the problem simpler.

by Carlo Fuerst, Maciej Pacut, Stefan Schmid at January 30, 2015 01:30 AM

A Cross-Layer Approach for Video Delivery over Wireless Video Sensor Networks. (arXiv:1501.07362v1 [cs.NI])

In this paper, we propose a novel cross-layer approach for video delivery over Wireless Video Sensor Networks (WVSN)s. We adopt an energy efficient and adaptive video compression scheme dedicated to the WVSNs, based on the H.264/AVC video compression standard. The encoder operates using two modes. In the first mode, the nodes capture the scene following a low frame rate. When an event is detected, the encoder switches to the second mode with a higher frame rate and outputs two different types of macroblocks, referring to the region of interest and the background respectively. Furthermore, we propose an Energy and Queue Buffer Size Aware MMSPEED-based protocol for reliably and energy efficiently routing both regions towards the destination. Simulation results prove that the proposed approach is energy efficient and delivers good quality video streams. In addition, the proposed routing protocol EQBSA-MMSPEED outperforms its predecessors, the QBSA-MMSPEED and the MMSPEED, providing 33% of lifetime extension and 3 dBs of video quality enhancement.

by Othmane Alaoui-Fdili (IEMN, GSCM-LRIT), Patrick CORLAY (IEMN), Youssef Fakhri (GSCM-LRIT), François-Xavier Coudoux (IEMN), Driss Aboutajdine (GSCM-LRIT) at January 30, 2015 01:30 AM

Performance Tuning of an Open-Source Parallel 3-D FFT Package OpenFFT. (arXiv:1501.07350v1 [cs.MS])

The fast Fourier transform (FFT) is a primitive kernel in numerous fields of science and engineering. OpenFFT is an open-source parallel package for 3-D FFTs, built on a communication-optimal domain decomposition method for achieving minimal volume of communication. In this paper, we analyze, model, and tune the performance of OpenFFT, paying a particular attention to tuning of communication that dominates the run time of large-scale calculations. We first analyze its performance on different machines for a thorough understanding of the behaviors of the package and machines. We then build a performance model of OpenFFT on the machines, dividing it into computation and communication with a modeling of network overhead. Based on the performance analysis, we develop six communication methods for performing communication with the aim of covering varied calculation scales on a wide variety of computational platforms. OpenFFT is therefore augmented with an auto-tuning of communication to select the best method in run time depending on their performance. Numerical results demonstrate that the optimized OpenFFT is able to deliver good performance in comparison with other state-of-the-art packages at different computational scales on a number of parallel machines. The performance model is also useful for performance predictions and understanding.

by Truong Vinh Truong Duy, Taisuke Ozaki at January 30, 2015 01:30 AM

Eavesdropping in Semiquantum Key Distribution Protocol. (arXiv:1205.2807v2 [quant-ph] UPDATED)

In semiquantum key-distribution (Boyer et al.) Alice has the same capability as in BB84 protocol, but Bob can measure and prepare qubits only in $\{|0\rangle, |1\rangle\}$ basis and reflect any other qubit. We study an eavesdropping strategy on this scheme that listens to the channel in both the directions. With the same level of disturbance induced in the channel, Eve can extract more information using our two-way strategy than what can be obtained by the direct application of one-way eavesdropping in BB84.

by Arpita Maitra, Goutam Paul at January 30, 2015 01:30 AM



Play - Scala Template, access list element

I have two arraylists, Tweets: ArrayList[TweetObject] and

cluster_sizes: ArrayList[Integer]

What I want to do is the following, but I couldn't find a way to do it. I don't want to mess with the TweetObject class just to add what is in the cluster_sizes array.

@for((tweet,index) <- Tweets.zipWithIndex){
    @form(action = routes.Application.clean_and_move_on()){

Is there any way to access this list like @cluster_sizes[@index] ?

by Berkay Dincer at January 30, 2015 01:15 AM


Constructing Volatility Smile from American Options

My question is about best practices for reconstructing volatility smiles for a fixed tenor from American option data. For simplicity/liquidity, I am currently considering options on SPY. I am having difficulty merging together information from calls and puts to construct the entire smile in a unified way. The resources I have found recommend constructing the surface from the out-of-the-money options of both types, so I am selecting an ATM strike, solving for the implied repo rate at that strike, and then constructing the vol surface by calculating implied volatilities from end-of-day mid prices of those out-of-the-money options using that repo rate. The problem with this is that the skews from the call and put sections do not line up, creating a kink at the strike where I join the surface. This kink causes a load of arbitrage violations; see the image below.

I am wondering what the standard way to correct for this is. One thing I can think of that may help would be to calculate an implied repo for each shared strike, but then we are digging into some in-the-money options, for which put-call parity has less reason to hold. On the same note, is it even OK to back out implied repos from ATM American option prices, given that put-call parity technically doesn't hold for them? Is there an alternative procedure for getting the appropriate rate?

Note that I am currently using European BS to derive the implied volatilities, but for OTM options I would hope the early exercise premium is small and does not affect the skew this much. Note: I have implemented the BAW American option approximation formula separately, and the skews appear to match worse (but this may be a bug!).

My question is somewhat related to this question although it is more of an extension.

Below is an example of the difference in skew for SPY options at the March 2015 regular expiry (puts are blue, calls are green). Note the kink at strike ~205.

vol smile example

by Mark at January 30, 2015 01:14 AM


Why is the most probable assignment for all variables in MRFs called MAP assignment?

I am new to graphical models, especially Markov random fields. I have a question about the MAP assignment. Let's say we have the graph structure and all the potential functions. The MAP assignment in MRFs is defined as the most probable assignment to all variables (with no given evidence in this case). My question is why it is called MAP (why not MLE or anything else), and what the connection between them is. I know MAP and MLE share a lot in common, except that MAP uses a prior distribution over the parameters. But I cannot figure out the connection between the most probable assignment in an MRF and MAP.
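For concreteness, the definition in question can be written as

$$\mathbf{x}^{\text{MAP}} = \arg\max_{\mathbf{x}} P(\mathbf{x} \mid E = e),$$

which with an empty evidence set reduces to $\arg\max_{\mathbf{x}} P(\mathbf{x})$, the most probable joint assignment; the name "MAP" is kept even when nothing is conditioned on.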

by Khoi Hoang at January 30, 2015 01:08 AM

Planet Clojure

Reinventing OOP with Clojure

From books we all know that the main principles of OOP are polymorphism and encapsulation, but another view is that the significant aspect of OOP is message passing. And in Clojure we have a cool library for dealing with messages – core.async. So we can build a simple "object" with it, and we can use core.match for "parsing" messages in this "object". Yep, it will be something like Erlang actors:

(require '[clojure.core.async :refer [go go-loop chan <! >! >!! <!!]])
(require '[clojure.core.match :refer [match]])

(def dog
  (let [messages (chan)]
    (go-loop []
      (match (<! messages)
        [:bark!] (println "Bark! Bark!")
        [:say! x] (println "Dog said:" x))
      (recur))
    messages))

Here I've just created a channel, and in the go-loop matched the messages received from it against the registered message patterns.

Format of messages is [:name & args].

We can easily test dog object by putting message in the channel:

user=> (>!! dog [:bark!])
# Bark! Bark!

user=> (>!! dog [:say! "Hello world!"])

# Dog said: Hello world!

Looks awesome, but maybe we should add some state? It's pretty simple:

(def stateful-dog
  (let [calls (chan)]
    (go-loop [state {:barked 0}]
      (recur (match (<! calls)
               [:bark!] (do (println "Bark! Bark!")
                            (update-in state [:barked] inc))
               [:how-many-barks?] (do (println (:barked state))
                                      state))))
    calls))

I've just put the default state in the bindings of the go-loop and recur with the new state after processing each message. And we can test it:

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:how-many-barks?])
# 1

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:bark!])
# Bark! Bark!

user=> (>!! stateful-dog [:how-many-barks?])
# 3

Great, but what if we want to receive result of the method? It's simple too:

(def answering-dog
  (let [calls (chan)]
    (go-loop [state {:barked 0}]
      (recur (match (<! calls)
               [:bark! _] (do (println "Bark! Bark!")
                              (update-in state [:barked] inc))
               [:how-many-barks? result] (do (>! result (:barked state))
                                             state))))
    calls))

I've just passed a channel as the last element of the message and put the result into it. It's not as simple to use as the previous examples, but it's OK:

user=> (>!! answering-dog [:bark!  (chan)])
# Bark! Bark!

user=> (>!! answering-dog [:bark!  (chan)])
# Bark! Bark!

user=> (let [result (chan)]
  #_=>   (>!! answering-dog [:how-many-barks? result])
  #_=>   (<!! result))
# 2

The last call looks too complex; let's add a few helpers to make it easier:

(defn call!!
  [obj & msg]
  (let [result (chan)]
    (>!! obj (conj (vec msg) result))
    (<!! result)))

(defn call!
  [obj & msg]
  (let [result (chan)]
    (>! obj (conj (vec msg) result))
    (<! result)))

call! should be used only inside a go block, call!! outside of one. Let's see them in action:

user=> (call!! answering-dog :how-many-barks?)
# 2

user=> (<!! (call! answering-dog :how-many-barks?))

user=> (call!! answering-dog :set-barks!)
# Exception in thread "async-dispatch-33" java.lang.IllegalArgumentException: No matching clause: [:set-barks!...

user=> (call!! answering-dog :how-many-barks?)
# ...

So now we have a problem: when an error happens in an object, the object dies and no longer responds to messages. So we should add try/catch to all methods; it's better to use macros for automating that. But first we should define the format of a response:

  • [:ok val] – all ok;
  • [:error error-reason] – error happened;
  • [:none] – we can't put just nil in a channel, so we'll use this.

Yep, you may notice that this looks like the Maybe/Option monad.

So let's write the macros:

(defn ok! [ch val] (go (>! ch [:ok val])))

(defn error! [ch reason] (go (>! ch [:error reason])))

(defn none! [ch] (go (>! ch [:none])))

(defmacro object
  [default-state & body]
  (let [flat-body (mapcat macroexpand body)]
    `(let [calls# (chan)]
       (go-loop ~default-state
         (recur (match (<! calls#)
                  ~@flat-body
                  [& msg#] (do (error! (last msg#) [:method-not-found (first msg#)])
                               ~@(take-nth 2 default-state)))))
       calls#)))

(defmacro method
  [pattern & body]
  [pattern `(try (do ~@body)
                 (catch Exception e#
                   (error! ~(last pattern) e#)))])

The object macro can be used for creating objects and the method macro for defining methods inside the object. Here you may notice that [& msg#] works just like method_missing in Ruby.

So now we can create objects using these macros:

(defn make-cat
  [name]
  (object [state {:age 10
                  :name name}]
    (method [:get-name result]
      (ok! result (:name state))
      state)
    (method [:set-name! new-name result]
      (none! result)
      (assoc state :name new-name))
    (method [:make-older! result]
      (error! result :not-implemented)
      state)))
(def cat (make-cat "Simon"))

We created an object cat with the methods get-name, set-name! and make-older!; make-cat is an improvised constructor. This object can be used like all the previous objects, but in combination with core.match it's even more useful:

user=> (match (call!! cat :get-name)
  #_=>   [:ok val] (println val))
# Simon

user=> (match (call!! cat :set-name! "UltraSimon")
  #_=>   [:none] (println "Name changed"))
# Name changed

user=> (match (call!! cat :get-name)
  #_=>   [:ok val] (println val))
# UltraSimon

user=> (match (call!! cat :make-older!)
  #_=>   [:ok age] (println "Now - " age)
  #_=>   [:error reason] (println "Failed with " reason))
# Failed with  :not-implemented

user=> (match (call!! cat :i-don't-know-what)
  #_=>   [:error _] (println "Failed"))
# Failed

Looks perfect! But that's not all – later I'll implement inheritance on top of this mess.

by Vladimir Iakovlev at January 30, 2015 01:00 AM


Lein repl resets namespace after loading Clojure script

Strange problem, I know. So lein can be instructed to load a specific namespace when running the REPL with lein repl. That's great, let's assume I have a file called ns1.clj, so my project.clj file contains the line:

:repl-options {:init-ns ns1}

And, as expected, that file is loaded. However, I want to switch to another namespace (ns2) after ns1.clj does its job, so I append the following to ns1.clj:

(ns ns2)

The problem is that Leiningen resets the REPL namespace to ns1 after the ns1.clj has finished. Is there any way to start the REPL by loading ns1.clj but not resetting the namespace post-load? By the way, I would assume that Leiningen should just execute the script and not set the namespace explicitly.

Background: I want to load a clj script and then switch to a namespace that has been loaded from an external source by that very script. So the logic in ns1.clj figures out what namespace should the REPL start in.

by Mate Varga at January 30, 2015 12:54 AM

Best way to install symfony2 with vagrant?

I have been trying to install Symfony2 within my Vagrant box (Ubuntu), but it is not working for me. I have tried a couple of provisioning scripts from GitHub, but none worked for me. Do you guys have a link to a good tutorial?

by Sanket Patel at January 30, 2015 12:54 AM


DragonFly BSD Digest

BSDNow 074: That Sly MINIX

Episode 74 of BSDNow is up, with some interesting stories of Linux users switching to BSD, and an interview with Andrew Tanenbaum of MINIX fame.

by Justin Sherrill at January 30, 2015 12:44 AM


How to conditionally exclude a scenario in cucumber

I am trying to exclude scenarios programmatically in Cucumber. The test cases are OS-dependent in my case: say, if the underlying OS is Windows, I would like to skip certain scenarios. After some research on Google I found out that there is a place where you can hook up this logic in Ruby, i.e. AfterConfiguration. However, I am not able to find where I can hook this up to Cucumber through Scala. I am also aware that it is not good practice to exclude scenarios, but I have no choice.

by JavaMan at January 30, 2015 12:44 AM

Why doesn't the Scala compiler accept this lambda as a parameter?

Suppose I have an interface for a Thing:

abstract class Thing[A](a_thing: A) {
  def thingA = a_thing
}

and I implement that Thing as follows:

class SpecificThing(a: String) extends Thing[String](a)

Furthermore, suppose I have a function that takes a Thing and a lambda that does something to that Thing as parameters:

def doSomething[A](fn: Thing[A] => A, t: Thing[A]) : A = fn(t)

Now, let's use this stuff:

val st = new SpecificThing("hi")
val fn1: (Thing[String]) => String = (t: Thing[String]) => { t.thingA }
println(doSomething(fn1, st))

This prints hi. So far, so good. But I'm lazy, and I don't like typing so much, so I change my program to the following:

type MyThing = Thing[String]
val st = new SpecificThing("hi")
val fn2: (MyThing) => String = (t: MyThing) => { t.thingA }
println(doSomething(fn2, st))

and this also prints hi. Fabulous! The compiler can tell that a SpecificThing is both a Thing[String] and a MyThing. But what about this case?

val st = new SpecificThing("hi")
val fn3: (SpecificThing) => String = (t: SpecificThing) => { t.thingA }
println(doSomething(fn3, st))

Now I get:

Error:(14, 23) type mismatch;
 found   : SpecificThing => String
 required: Thing[?] => ?
  println(doSomething(fn3, st))

What's going on? What's a Thing[?]?
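For what it's worth, the mismatch can be reproduced minimally: Function1 is contravariant in its argument, so a SpecificThing => String is not a Thing[String] => String, and no choice of A makes fn3 fit Thing[A] => A. A sketch of one workaround (doSomethingSub is a name of my own choosing): let the function's input type be a subtype of Thing[A].

```scala
// Assumed setup from the question above:
abstract class Thing[A](a_thing: A) { def thingA = a_thing }
class SpecificThing(a: String) extends Thing[String](a)

// Hypothetical variant: the function's input may be any subtype T of Thing[A],
// so a SpecificThing => String is accepted directly.
def doSomethingSub[A, T <: Thing[A]](fn: T => A, t: T): A = fn(t)

val st = new SpecificThing("hi")
val fn3: SpecificThing => String = (t: SpecificThing) => t.thingA
println(doSomethingSub[String, SpecificThing](fn3, st)) // prints hi
```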

by Dan Barowy at January 30, 2015 12:41 AM

How do I break out of a loop in Scala?

How do I break out of a loop?

var largest = 0
for (i <- 999 to 1 by -1) {
  for (j <- i to 1 by -1) {
    val product = i * j
    if (largest > product)
      // I want to break out here
      largest = largest max product
  }
}

How do I turn nested for loops into tail recursion?

From the Scala talk at FOSDEM 2009, on page 22:

Break and continue: Scala does not have them. Why? They are a bit imperative; better use many smaller functions. Issue: how to interact with closures. They are not needed!

What is the explanation?
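One standard-library answer, for what it's worth: scala.util.control.Breaks implements break with an exception under the hood, and a breakable block placed around only the inner loop breaks out of just that loop. A sketch against the code above:

```scala
import scala.util.control.Breaks._

var largest = 0
for (i <- 999 to 1 by -1) {
  breakable {
    for (j <- i to 1 by -1) {
      val product = i * j
      if (largest > product) break() // leaves only the inner loop
      largest = largest max product
    }
  }
}
println(largest) // 998001 (999 * 999)
```

The nested loops can equally be rewritten as a tail-recursive function that simply returns instead of breaking, which is the style the FOSDEM quote advocates.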

by TiansHUo at January 30, 2015 12:33 AM

Scala, Futures, WS library, Api

I have a Play! (Scala) application which communicates with another server by sending HTTP requests. That system has a limitation: only 5 HTTP requests can be processed simultaneously for one token.

I've written this method:


private def sendApiRequest(method: String, params: Option[JsValue] = None)(implicit token: String): Future[JsObject] = {

  if (concurrentRequests.get(token).isEmpty) {
    concurrentRequests += token -> 1
  } else {
    concurrentRequests += token -> (concurrentRequests.get(token).get + 1)
  }

  println(s"$token: ${concurrentRequests.get(token).get}")

  val request = WS.url(API_URL).withQueryString(
                  "application_id" -> clientId,
                  "method" -> method,
                  "token" -> token,
                  "param" -> params.map(_.toString).getOrElse(""))

  request.execute().map(response => {
    val result = response.json.as[JsObject]
    if (!result.keys.contains("data")) {
      throw new Exception(result.toString())
    } else {
      result
    }
  })
}
There are actors which use this method, and I get that exception after a couple of seconds.

My question is: how can I control the number of Futures that are running at once? Maybe I should use another execution context instead of the default one? Please explain, or point me to a good introduction to execution contexts, threads, etc.

I want to get information from the remote service as fast as possible, not by sending requests one by one.

Thank you!
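One way to cap client-side concurrency, sketched under the assumption that running each call body on a dedicated thread is acceptable: back an ExecutionContext with a fixed pool of 5 threads, so at most 5 bodies execute at once. (Note this caps threads, not in-flight non-blocking requests; for truly non-blocking WS calls a per-token queue or semaphore would be needed instead. The helper names are mine.)

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// A dedicated pool of 5 workers.
val limitedEc: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(5))

// At most 5 invocations of `body` run simultaneously on this pool.
def limited[A](body: => A): Future[A] = Future(body)(limitedEc)
```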

by Alexander Kondaurov at January 30, 2015 12:32 AM


Tightest upper bound on length of distinguishing string in Hopcroft's algorithm

Hopcroft's algorithm is an algorithm for DFA minimization that produces a table identifying which pairs of states are distinguishable.

What is the tightest possible upper bound (with proof) on the minimum length of a string that distinguishes two distinguishable states?

This question is taken directly from Hopcroft, Motwani & Ullman, Exercise 4.4.3. I believe that the tightest upper bound is $\frac{(n-1)(n-2)}{2}-1$, but other sources I have found indicate that the tightest bound is $n-2$.

by Noah Schoem at January 30, 2015 12:30 AM


How to combine DataTables with other matchers?

I was trying to populate a container using a specs2 DataTable and then check some conditions on it. The problem is that matchers after the DataTable are ignored. Consider the code below:

class MySpec extends Specification with DataTables {

  "A Container" should {
    "after data is added container should have the following data" in new TestContainer {
      "a"  | "flag" | "d"   |
      100  ! 1      ! "abc" |
      300  ! 1      ! "abc" |
      200  ! 0      ! "xyz" |> {
        (a, flag, d) =>
          container.add(Data(a, flag, d)) must not(throwA[Exception])
      }
      container.size must_== 3 // Ignored
      1 must_== 2 // Ignored
    }
  }
}

Please let me know what I am missing and how to make the lines marked // Ignored be validated.

by andruha at January 30, 2015 12:30 AM


Eshell on Windows still shows garbled characters

Everything is UTF-8, and I can input Chinese just fine, except that some apps output garbled characters:

~ $ 中华人民共和国

~ $ ntp


submitted by sw2wolf

January 30, 2015 12:18 AM


Twitter Finagle open too many files

I use Twitter Finagle to create a server. In each RPC function of the server, I just use a Finagle client to call another server's RPC, like this:

def rpc() = {
  // finagleClient is created in a std way according to Finagle's Doc:
  // val client = Thrift.newIface[Hello.FutureIface]("localhost:8080")
  val f: Future[xx] = finagleClient.otherRpc()
  f onSuccess { _ => /* do something */ }
  f onFailure { e => /* handle exception */ }
}
But before long, an error happens: Failed to accept a connection: too many open files.

And using lsof -p I find that there are too many connections to the other server (about 5000 connections!). I want to know how this happens. Is there anything I missed?

by hliu at January 30, 2015 12:02 AM

HN Daily

January 29, 2015


encapsulation for mixin's members in Scala

Traits in Scala can be used as both mixins and interfaces. This leads to some inconsistency: if I want to close some method inside a trait, I just can't do that:

object Library {
    protected trait A { def a: Int = 5 }
    trait B extends A { private override def a: Int = super.a }
    //I want to close the `a` member for all traits extending B; it's still possible to open it in some other trait `C extends A`, or even `Z extends B with C`
}

// Exiting paste mode, now interpreting.

<console>:10: error: overriding method a in trait A of type => Int;
 method a has weaker access privileges; it should not be private
           trait B extends A { private override def a: Int = super.a }

Such an error is totally fine from an LSP perspective, as I (or the compiler) may want to cast the object to the supertype A. But if I'm just using it as a mixin, I never actually need to do that, for instance in some Cake-pattern variation. I'll do something like:

 import Library._
 object O extends B with K with L with App

and that's it. I can't even access trait A here. I know, there is type inference which may go up to the supertype, but it's just a "line of types", so the compiler could just skip A here and go on (of course this is very, very theoretical). Another example: here I have to provide a default implementation for a method I don't really need.

The current solution I use is OOP-style composition, but it's not as flexible (as linearization doesn't work here) and not very compatible with the mixin concept. Some projects I've seen actually do mixins and have "over 9000" redundant visible members. Several years ago there was an idea to "mark" such mixin compositions with a with keyword specified instead of extends, but I can't even find that thread now.

So, is there any better practices for ad-hoc member encapsulation?

by dk14 at January 29, 2015 11:53 PM

What is the optimal way (not using Scalaz) to type require a non-empty List?

As I am working on a design model, I am torn between two different methods of indicating that a parameter of type List must be non-empty. I began by using List[Int] with an accompanying require statement to verify the List is nonEmpty.

case class A(name: String, favoriteNumbers: List[Int]) {
  require(favoriteNumbers.nonEmpty, "favoriteNumbers must not be empty")

I then needed to make the list optional: if the List is provided, it must be nonEmpty. I'm using Option[List[Int]] with an accompanying require statement to verify that, if the Option is nonEmpty, the list must also be nonEmpty.

case class B(name: String, favoriteNumbers: Option[List[Int]]) {
  require(
      favoriteNumbers.isEmpty || favoriteNumbers.get.nonEmpty
    , "when defined, favoriteNumbers.get must be nonEmpty"
  )
}

However, I need to use this non-empty List all over the system I am modeling. This means that my code has the same require statements duplicated everywhere. Is there a (non-Scalaz) way to have a new type, say NeList, which is defined and behaves identically to List, with the only change being that an exception is thrown when a NeList is instantiated with no elements?

I tried to Google for this and couldn't find a set of search terms to home in on this area. I either got really simple List how-tos, or all sorts of references to Scalaz's NEL (Non-Empty List). So, if there is a link out there that would help with this, I would love to see it.
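For comparison, here is a minimal sketch of such a type (the name NeList and its shape are my own, not Scalaz): keeping the head and the tail as separate fields makes emptiness unrepresentable, instead of checked by require at runtime.

```scala
final case class NeList[+A](head: A, tail: List[A]) {
  def toList: List[A] = head :: tail
}

object NeList {
  // Total constructor: the empty case is an explicit None, not an exception.
  def fromList[A](xs: List[A]): Option[NeList[A]] = xs match {
    case h :: t => Some(NeList(h, t))
    case Nil    => None
  }
}

NeList.fromList(List(1, 2, 3)).map(_.toList) // Some(List(1, 2, 3))
NeList.fromList(List.empty[Int])             // None
```

The trade-off versus a require-based case class is that emptiness errors are surfaced as a None at construction time rather than as an IllegalArgumentException wherever the list is used.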

by chaotic3quilibrium at January 29, 2015 11:49 PM


Calculation of Returns and Risk Metrics for L/S Portfolio

I am trying to build a test for a long/short portfolio. I am aiming for market neutral and have put together a long portfolio as well as a short portfolio (see below). However, I am not sure if I am doing this correctly. The specific questions I have are:

  1. Will betas be negative for the short book?
  2. How would I calculate returns for the L/S portfolio?
  3. How would you factor in margin/borrowing costs in a model like this?
  4. Is there a different way that I need to calculate the Sharpe ratio, since this is an L/S portfolio?
  5. How should I use the benchmark (I have read elsewhere that the benchmark should be long for the long portfolio, short the same benchmark for the short portfolio, and then long a cash investment)?
  6. Is there anything else that I should be considering here?
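On question 2, one common sketch (assuming returns are measured against total capital $C$, with $L$ and $S$ the dollar sizes of the long and short books) is

$$r_p = \frac{L\,r_{L} - S\,r_{S} + \text{cash income} - \text{financing costs}}{C},$$

where the short leg enters with a minus sign because a short position profits when its holdings fall; the margin/borrow costs from question 3 would sit in the financing term.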




by Andrei at January 29, 2015 11:30 PM



How does `lein deps' work?

Can anyone tell me how lein deps works? If lein finds the dependency (the version required by the project) in ~/.m2, will lein still download the same package again?

by Daniel Wu at January 29, 2015 11:17 PM

Adding inline images to Mailgun emails using Scala and Play WS

I can successfully make POST requests to Mailgun and receive the emails as expected. I'm trying to inline an image into an email and can't work out how to do it.

Looking at the Mailgun documentation and selecting Java, I can see that the example given constructs a FileDataBodyPart with "inline", the File reference and the MediaType. Looking at the curl example, this seems rather unnecessary, as that just references a file.

Here is my method for sending an email:

  def send(message:EmailMessage) = {
    val postMessage = Map("from" -> Seq(message.from), "to" -> Seq(, "subject" -> Seq(message.subject), "text" -> Seq(message.text), "html" -> Seq(message.html.toString()))
    val logo = FileBody(Play.getExistingFile("/public/images/logo.png").get)
    WS.url(apiUrl).withAuth("api", myKey, WSAuthScheme.BASIC).withBody(logo).post(postMessage)
  }

The message.html.toString looks like the following:

    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
<body style="background-color:#9B59B6; padding:10px">
    <img src="cid:logo.png">
    <h1 style="color:#FFF">Activate!</h1>

The logo.png file is found when sending the email and the email comes through fine, but with no image. This is what the email source looks like once it arrives at gmail:

Mime-Version: 1.0
Content-Type: text/html; charset="ascii"
Content-Transfer-Encoding: 7bit

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <body style="background-color:#9B59B6; padding:10px">
        <img src="cid:logo.png">
        <h1 style="color:#FFF">Activate!</h1>

I can't see any base64 encoding of the image in the email. As the curl example appeared to just be passing a file as part of the POST, I thought I'd try that. Here is what I did:

  def send(message:EmailMessage) = {

    val logoFile = Play.getExistingFile("/public/images/logo.png").get
    val source = Files.readAllBytes(Paths.get(logoFile.getAbsolutePath))
    val logoBase64 = Base64.encodeBase64String(source)

    val postMessage = Map("from" -> Seq(message.from), "to" -> Seq(, "subject" -> Seq(message.subject), "text" -> Seq(message.text), "html" -> Seq(message.html.toString()), "inline" -> Seq(logoBase64))
    WS.url("").withAuth("api", "key-f165695d4c72e929ff8215115e648c95", WSAuthScheme.BASIC).post(postMessage)
  }

I converted the logo into base64 and POSTed it like the other parameters. Still no joy.

What am I missing here? Do I need to pass this in the body, but somehow specify that this is an "inline" file?

by Arthur at January 29, 2015 11:15 PM


What is the Algorithm to find all the possible chordal graphs which can be formed by a given 'n' number of vertices

A chordal graph is a connected graph which contains no chordless cycle of size greater than three. Chordal graphs are also called triangulated graphs.

All paths are chordal graphs (no cycles).

All trees are chordal graphs (no cycles).

All cliques are chordal graphs.

Among chordal graphs on a given n vertices, a path contains the minimum number of edges and a clique contains the maximum number of edges.

Wiki Link for more explanation of chordal graphs

So for a fixed number n, what is the algorithm to find all the possible chordal graphs on n vertices?

For example:

  • n= 2, Answer = 1 chordal graph.
  • n= 3, Answer = 2 chordal graphs,
  • n= 4, Answer = 5 chordal graphs,
  • n= 5, Answer = 15 chordal graphs.

The above are determined by drawing all possible examples. Any algorithm?
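The small counts above can at least be confirmed mechanically. A brute-force sketch (my own code, exponential, only feasible for tiny n): enumerate all edge subsets, keep the graphs that are connected and chordal (chordality checked via repeated simplicial-vertex elimination, which succeeds iff a perfect elimination ordering exists), and count them up to isomorphism using a canonical form over all vertex permutations.

```scala
def countConnectedChordal(n: Int): Int = {
  val vertices = (0 until n).toList
  val allEdges = for (i <- vertices; j <- vertices if i < j) yield (i, j)

  def neighbors(edges: Set[(Int, Int)], v: Int): Set[Int] =
    edges.collect {
      case (`v`, u) => u
      case (u, `v`) => u
    }

  def connected(edges: Set[(Int, Int)]): Boolean = {
    var seen = Set(0)
    var frontier = Set(0)
    while (frontier.nonEmpty) {
      frontier = frontier.flatMap(neighbors(edges, _)) -- seen
      seen ++= frontier
    }
    seen.size == n
  }

  // Chordal iff we can repeatedly delete a simplicial vertex
  // (one whose remaining neighborhood is a clique) until nothing is left.
  def chordal(edges: Set[(Int, Int)], left: Set[Int]): Boolean =
    left.isEmpty || {
      val simplicial = left.find { v =>
        val nb = neighbors(edges, v) & left
        nb.forall(a => nb.forall(b => a >= b || edges((a min b, a max b))))
      }
      simplicial.exists(v => chordal(edges, left - v))
    }

  // Canonical form: the lexicographically smallest relabeling of the edge set.
  def canonical(edges: Set[(Int, Int)]): Set[(Int, Int)] =
    vertices.permutations
      .map(p => { case (a, b) => (p(a) min p(b), p(a) max p(b)) })
      .minBy(_.toList.sorted.mkString)

  allEdges.toSet.subsets()
    .filter(e => connected(e) && chordal(e, vertices.toSet))
    .map(canonical)
    .toSet
    .size
}

// countConnectedChordal(3) == 2, countConnectedChordal(4) == 5
```

This reproduces the hand counts for small n; an efficient enumeration algorithm (the actual question) would need to generate chordal graphs directly, e.g. by extending perfect elimination orderings, rather than filtering all graphs.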

by rahulyadav at January 29, 2015 11:05 PM


Time complexity for searching $k-th$ element from starting and ending of a linked list

What are the time complexities of finding the 8th element from the beginning and the 8th element from the end of a singly linked list? Let $n$ be the number of nodes in the linked list; you may assume that $n > 8$.

The answer is given as $O(1)$ and $O(n)$.
What I have learnt so far is that the search operation in a linked list takes linear time, since it doesn't have indexes like arrays. Then why would finding the 8th element take constant time?
Further explanation for the answer is as follows:

Finding the 8th element from the beginning requires 8 nodes to be traversed, which takes constant time. Finding the 8th from the end requires the complete list to be traversed.

Can someone explain the concept behind this to me?
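The concept can be sketched in code (using Scala's List as a stand-in for a singly linked list; the helper names are mine): reaching the 8th node from the head costs at most 8 pointer hops, a constant independent of $n$, while the 8th from the end needs a full pass, because the list can only be walked forward.

```scala
def kthFromStart[A](xs: List[A], k: Int): A =
  if (k == 1) xs.head else kthFromStart(xs.tail, k - 1) // O(k) hops: O(1) for fixed k = 8

def kthFromEnd[A](xs: List[A], k: Int): A = {
  // Two cursors k apart; when the lead cursor falls off the end,
  // the trailing cursor sits on the k-th node from the end: O(n).
  def loop(lead: List[A], trail: List[A]): A = lead match {
    case Nil => trail.head
    case _   => loop(lead.tail, trail.tail)
  }
  loop(xs.drop(k), xs)
}

val xs = (1 to 20).toList
kthFromStart(xs, 8) // 8
kthFromEnd(xs, 8)   // 13
```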

by Sidsec9 at January 29, 2015 10:59 PM


Running application with sbt and Akka Microkernel

Can I use both in my project?

Previously I had only an object extending the App trait, but since I started using the microkernel I need to have a class extending the Bootable trait.

Let's say that I have something like this:

lazy val backend = (project in file("backend"))
  .settings(
    name := "backend",
    mainClass in Compile := Some("backend.Backend"),
    libraryDependencies ++= Dependencies.backend,
    javaOptions in run ++= Seq("-Djava.library.path=./sigar"),
    fork in run := true)

and Backend class like this:

class Backend extends Bootable {

  val system = ActorSystem("mobile-cluster")

  def startup() = {
    FactorialBackend startOn system
  }

  def shutdown() = {
    system.shutdown()
  }
}
I cannot start the app with sbt run (there is an error about a missing static main method), but it works with the microkernel: when I run sbt stage and then start the application using the generated script, it works fine.

When I'm using something like this:

object Backend extends App {

  val system = ActorSystem("application")

  FactorialBackend startOn system
}

I can start the app with sbt "project backend" "run", but the microkernel doesn't work anymore.

What can I do about that? Should I have separate files for starting the application with the microkernel and with sbt, or separate build configurations?

I need to have a production version of the application using the microkernel, and I also want to just run and debug my application during development using sbt.

I tried to use the same class or object extending both the App and Bootable traits, and to set up separate configurations for the microkernel and sbt run, but it didn't help.
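One arrangement that may work, sketched under the assumption of the Bootable class shown above (BackendApp is a name of my own choosing): keep the Bootable as the single source of truth and add a thin App object that drives it, so sbt run has a main method while the microkernel keeps using Backend directly.

```scala
// Hypothetical development entry point, alongside the Bootable `Backend` above.
object BackendApp extends App {
  val backend = new Backend // the Bootable from above
  backend.startup()
  sys.addShutdownHook(backend.shutdown())
}
```

During development, mainClass in Compile would then point at backend.BackendApp instead of backend.Backend.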

by Piotr Kozlowski at January 29, 2015 10:50 PM

How to publish an rpm to an Artifactory-hosted Yum repository using sbt-native-packager?

I am trying to publish an rpm built with sbt-native-packager to a Yum repository hosted by Artifactory. My project uses the JavaAppPackaging archetype. I want to publish to the path of my Yum repo, which is /rhel/linux/7/x86_64. Unfortunately, it is always published to a Maven-like path, /artifactId/version/mypackage-version-arch.rpm.

Is there some setting I can change to accomplish this?

by BryanWB at January 29, 2015 10:37 PM



How to make a port recalculate dependencies

I am trying to install mail/pine-pgp-filters on my FreeBSD box, but I am running into a problem. I first tried to install it without having GPG installed, and it listed security/gpg1 as a dependency. I wanted gpg2 (security/gpg), and so I built and installed that. I then attempted to re-install pine-pgp-filters, but it still prompted me to install gpg1.

I have confirmed that it is compatible with gpg2, and this segment of the Makefile should take care of which version to use:

# We want to be version-agnostic here, but also record the right dependency
# if the user installs the package and already has one or the other installed.
.if exists(${LOCALBASE}/bin/gpg2)
BUILD_DEPENDS=  gpg2:${PORTSDIR}/security/gnupg
RUN_DEPENDS+=   gpg2:${PORTSDIR}/security/gnupg
.else
BUILD_DEPENDS=  gpg:${PORTSDIR}/security/gnupg1
RUN_DEPENDS+=   gpg:${PORTSDIR}/security/gnupg1
.endif

So, my question is: how do you make a port re-consider its dependencies? And if that isn't my problem, then what is?

I am happy with solutions using ports directly, portmaster, pkg, whatever.

by felixphew at January 29, 2015 10:26 PM


How to create custom directive to reuse routes?

I have a route snippet that I want to reuse in multiple scenarios:

val dirSegment = "licenses"
path( dirSegment ~ PathEnd ) {
  redirect( dirSegment + "/", StatusCodes.MovedPermanently ) 
} ~ 
pathPrefix(dirSegment) { 
  path("") {
    /* do something */

I'd like to turn this into a directive (or parameterizable route?) where I can specify the value of the dirSegment val and arbitrary further routing/code in place of path("") { /* do something */ }, while retaining the redirect behavior, looking something like the following:

directoryPath("licenses") {
  path("") {
    /* do something */
  }
} ~
directoryPath("about") {
  path("") {
    /* do something else */
  }
}

Whereas that would have equivalent behavior to the following without all the repetition:

val dirSegment = "licenses"
val anotherDir = "About"

path( dirSegment ~ PathEnd ) {
  redirect(dirSegment + "/", StatusCodes.MovedPermanently )
} ~
pathPrefix(dirSegment) {
  path("") {
    /* do something */
  }
} ~
path( anotherDir ~ PathEnd ) {
  redirect(anotherDir + "/", StatusCodes.MovedPermanently )
} ~
pathPrefix(anotherDir) {
  path("") {
    /* do something else */
  }
}

Note that this question was inspired by some of the discussion in How do I automatically add slash to end of a url in spray routing?
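For what it's worth, such a helper can be sketched directly in terms of the combinators already used above (directoryPath is the hypothetical name from the desired usage; Route is spray-routing's route type):

```scala
// A sketch, assuming spray-routing's path / pathPrefix / redirect as used above.
def directoryPath(dirSegment: String)(inner: Route): Route =
  path(dirSegment ~ PathEnd) {
    redirect(dirSegment + "/", StatusCodes.MovedPermanently)
  } ~
  pathPrefix(dirSegment) {
    inner
  }

// Usage, mirroring the desired form:
// directoryPath("licenses") { path("") { /* do something */ } } ~
// directoryPath("about")    { path("") { /* do something else */ } }
```

Since spray routes are ordinary values, factoring the redirect-plus-prefix pattern into a plain function like this is often enough; a full custom directive is only needed when extraction is involved.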

by Connie Dobbs at January 29, 2015 10:23 PM

What is the purpose of AbstractSystem trait in akka-spray-websocket activator?

I'm learning Spray and Akka, through TypeSafe's templates, and this one is complex, to say the least:

I now understand that the weird structure this template has is meant to separate routing logic from business logic, and it's amazingly done. However, although I know the purpose of this structure, I don't know the functionality of this small piece and why it is necessary:

They have a class called MainActors.scala:

trait MainActors {
  this: AbstractSystem =>

  lazy val find = system.actorOf(Props[FindActor], "find")
  lazy val hide = system.actorOf(Props[HideActor], "hide")
}

Then the template concatenates all the routings under a class called ReactiveApi.scala:

trait AbstractSystem {
  implicit def system: ActorSystem
}

trait ReactiveApi extends RouteConcatenation with StaticRoute with AbstractSystem {
  this: MainActors =>

  val rootService = system.actorOf(Props(classOf[RootService], routes))

  lazy val routes = logRequest(showReq _) {
    new FindService(find).route ~
    new HideService(hide).route ~
    staticRoute
  }

  private def showReq(req : HttpRequest) = LogEntry(req.uri, InfoLevel)
}

Actually, my question is simple: what is the purpose of AbstractSystem trait? how is it used and why is it used?

This trait is also passed into actual actor:

class FindService(find : ActorRef)(implicit system : ActorSystem) extends Directives {
  lazy val route = ...
}

Also, if it is not entirely inconvenient, what's the functionality of logRequest() and showReq()?

For Spray: why do I have to pass an actor (ActorRef) into FindService? I don't see any specific methods being invoked on it from the inside.

by Wind Dweller at January 29, 2015 10:19 PM

Is there a central site/page for "advanced Scala" topics?

Despite having read "Programming in Scala" several times, I still often find important Scala constructs that were not explained in the book, like


and other strange constructs like

new { ... }  // No class name!

and so on.

I find this rather frustrating, considering that the book was written by the Scala "inventor" himself, and others.

I tried to read the language specification, but it's made for academics, rather than practicing programmers. It made my head spin.

Is there a website for "Everything "Programming in Scala" Didn't Tell You" ?

There was the daily-scala blog, but it died over a year ago.

by Sebastien Diot at January 29, 2015 10:07 PM


A while ago I reported about this service in England ...

A while ago I reported about this service in England, and one of my readers decided to give it a try.

So he went to the Royal Observatory in Greenwich, the perfect place for such an observation, and tried it out there. Nicely done, with the ROYAL OBSERVATORY lettering in the background! Great :-)

January 29, 2015 10:01 PM

When you apply for a fellowship at All Souls College of the University ...

When you apply for a fellowship at All Souls College of the University of Oxford, they give you a questionnaire. These questions are so utterly magnificent that you should all take a day off to read through all of them and think briefly about each one. Here are a few examples.

The official archive is here.

I find the general questions the best. You should really carry them around on a little cheat sheet, and when you meet someone at a social evening and want to get to know them, you can discuss one of these questions. Things like "Do we need borders?" or "Did Eve act rightly?" The answer doesn't matter; the point is to see the process of thinking and arguing.

January 29, 2015 10:01 PM

Good news everybody! RFC 20 ("ASCII format for network ...

Good news everybody! RFC 20 ("ASCII format for network interchange") is now an Internet Standard. I for one would have suggested specifying UTF-8 instead. But hey, what do I know.

January 29, 2015 10:01 PM


What is the purpose of the state monad?

I am a JavaScript developer on a journey to up my skills in functional programming. I recently ran into a wall when it comes to managing state. When searching for a solution I stumbled over the state monad in various articles and videos, but I have a really hard time understanding it. I am wondering if it is because I expect it to be something it is not.

The problem I am trying to solve

In a web client I am fetching resources from the back end. To avoid unnecessary traffic I am creating a simple cache on the client side which contains the already fetched data. The cache is my state. I want several of my modules to be able to hold a reference to the cache and query it for its current state, a state that may have been modified by another module.

This is of course not a problem in javascript since it is possible to mutate state but I would like to learn more about functional programming and I was hoping that the state monad would help me.

What I would expect

I had assumed that I could do something like this:

var state = State.of(1);
map(add(1), state);
state.evalState() // => 2 

This obviously doesn't work. The state is always 1.

My question

Are my assumptions about the state monad wrong, or am I simply using it incorrectly?

I realize that I can do this:

var state = State.of(1);
var newState = map(add(1), state);

... and newState will be a state of 2. But here I don't really see the use of the state monad since I will have to create a new instance in order for the value to change. This to me seems to be what is always done in functional programming where values are immutable.
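For what it's worth, that "new instance" behaviour is exactly the point: the state monad makes the threading of state explicit and composable instead of hiding it in mutation. A minimal Scala sketch (the names here are illustrative, not any particular library's API):

```scala
// State[S, A] wraps a function S => (A, S). Nothing mutates; "running"
// the program threads the state through the composed functions.
case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] =
    State { s => val (a, s1) = run(s); (f(a), s1) }
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (a, s1) = run(s); f(a).run(s1) }
}

// `modify` describes a state change; `get` reads the current state.
def modify[S](f: S => S): State[S, Unit] = State(s => ((), f(s)))
def get[S]: State[S, S] = State(s => (s, s))

// `program` is only a description; the value and final state exist
// once an initial state is supplied to `run`.
val program: State[Int, Int] = for {
  _ <- modify[Int](_ + 1)
  _ <- modify[Int](_ * 10)
  v <- get[Int]
} yield v

val (result, finalState) = program.run(1)
// result == 20, finalState == 20
```

So `map(add(1), state)` returning a new description is by design; `flatMap` (or the for-comprehension) is what lets many steps share a single threaded state without each caller re-plumbing it.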

by Ludwig Magnusson at January 29, 2015 10:01 PM




What are the approaches to deploy static files so Spray can serve them?

I have a html page I would like to serve it in spray server.

I am familiar with

  1. How can I create an archive that can be deployed to Spray (similarly to Tomcat's .war files that can be deployed to webapps directory)?

  2. How to copy files from one directory to other with SBT (like we have copy task for ant build tool)

  3. referring to "will serve from a JAR file. In this case client is a directory that has been packaged in a JAR archive. BTW this works in dev mode without packaging JAR as well."

My project structure is like

src/main/resources/<files to include in main jar here>
src/main/scala/<main Scala sources>
src/main/java/<main Java sources>
src/test/resources/<files to include in test jar here>
src/test/scala/<test Scala sources>
src/test/java/<test Java sources>

so where I need to keep the directory called "client". where I need to keep my .html files and js files my html refers.

if I issue a package command at SBT interactive mode it will give me jar. how can I run that jar from build.sbt or build.scala

by Vithre at January 29, 2015 09:57 PM

How to match specific accept headers in a route?

I want to create a route that matches only if the client sends a specific Accept header. I use Spray 1.2-20130822.

I'd like to get the route working:

def receive = runRoute {
    get {
      path("") {
        accept("application/json") {

Here I found a spec using an accept() function, but I can't figure out what to import in my Spray-Handler to make it work as directive. Also, I did not find other doc on header directives but these stubs.

by rompetroll at January 29, 2015 09:51 PM

Is it possible to have Nashorn load scripts from classpath?

Is it possible to have Nashorn's load method use the project's classpath when resolving URIs?

Here's what I'm attempting to do:

(defn create-engine
  "Creates a new nashorn script engine and loads dependencies into its context."
  [dependencies]
  (let [nashorn (.getEngineByName (ScriptEngineManager.) "nashorn")
        scripts (map #(str "load('" % "');") dependencies)]
    (.eval nashorn "var global = this;")
    (doseq [script scripts] (.eval nashorn script))
    nashorn))

(def app "public/javascripts/app.js") ; in /resources, on classpath

; resulting exception:
javax.script.ScriptException: TypeError: 
Cannot load script from public/javascripts/app.js in <eval> at line number 1

by pdoherty926 at January 29, 2015 09:50 PM

how to get the version of the current clojure project in the repl

Is it possible to grab the project information within the clojure repl?

For example.... if there was a project defined:

(defproject blahproject "0.1.2"....)

and running a repl in the project directory

is there a function like this?

> (project-version) 
;=> 0.1.2 

by zcaudate at January 29, 2015 09:49 PM

How to mix in traits with implicit vals of the same name but different types?

I have traits from two third party libraries that I'm trying to mix in to my own trait. They both define implicit vals named log.

However, they are of different types - one is an SLF4J Logger, the other is a Spray LoggingContext (which is really an Akka LoggingAdapter). In fact the second trait is from Spray, it's an HttpServer. (Not the most recent version you can find on Github which no longer has that val).

So, here's the code (library one renamed because it's proprietary, the Spray code snipped to show just relevant part):

object LibraryOneShim {
    trait LibraryOne {
        implicit val log: org.slf4j.Logger = ...

trait HttpService extends Directives {
    val log = LoggingContext.fromActorRefFactory // this is a LoggingContext/LoggingAdapter

trait MyTrait extends HttpService with LibraryOne {
    val myRoute = ...

class MyActor extends Actor with MyTrait {
    def receive = runRoute(myRoute)

This won't compile. The compiler complains:

error: overriding lazy value log in trait HttpService of type java.lang.Object with spray.util.LoggingContext; lazy value log in trait LibraryOne$class of type org.slf4j.Logger needs `override' modifier trait DemoService extends HttpService with LibraryOne {

Is there any way I can mix in these two traits together?

by ryryguy at January 29, 2015 09:48 PM


Can you make a process pool with shell scripts?

Say I have a great number of jobs (dozens or hundreds) that need doing, but they're CPU intensive and only a few can be run at once. Is there an easy way to run X jobs at once and start a new one when one has finished? The only thing I can come up with is something like below (pseudo-code):

pids=(); # hash/associative array
while (jobs); do
    while (cur_jobs < MAX_JOBS); do
        pop and spawn job and store PID and anything else needed;
    sleep 5;
    for each PID:
        if no longer active; then
            remove PID;

I feel like I'm over-complicating the solution, as I often do. The target system is FreeBSD, if there might be some port that does all the hard work, but a generic solution or common idiom would be preferable.

by Jason Lefler at January 29, 2015 09:34 PM


How to access sbt managed resource in Scala program

I create a managed resource through the following code in build.sbt:

resourceGenerators in Compile <+=
  (resourceManaged in Compile, name, version) map { (dir, n, v) =>
    val file = dir / "version"
    val contents = Process("git rev-parse HEAD").lines.head
    IO.write(file, contents)
    Seq(file) // a resource generator must return the files it produced
  }

I can see it well created under target/scala-2.11/resource_managed/main

I extract its contents in my application as follows:

  val version = getClass.getResource("version")

I wonder if there's a Scala class for accessing resources that is preferable to Java's getClass.getResource.
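There is no dedicated Scala resource API; the usual idiom is to wrap getResourceAsStream with scala.io.Source. A hedged sketch (the Option wrapper and the helper name are my own):

```scala
import scala.io.Source

// Wraps getResourceAsStream; yields None for a missing resource
// instead of a null stream. A path starting with "/" is resolved from
// the classpath root rather than relative to the class's package.
def readResource(path: String): Option[String] =
  Option(getClass.getResourceAsStream(path)).map { stream =>
    try Source.fromInputStream(stream).mkString
    finally stream.close()
  }

// e.g. readResource("/version") for the generated resource above
```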

by matt at January 29, 2015 09:33 PM

What is the implicit resolution sequence in this "simple" ScalaZ tutorial code example?

The code snippet below is taken from this ScalaZ tutorial.

I cannot figure out how the implicit resolution rules are applied when evaluating 10.truthy at the bottom of the code example.

Things that - I think - I do understand are the following:

1) The implicit value intCanTruthy is an instance of an anonymous subclass of CanTruthy[A] which defines the truthys method for Int-s according to :

scala> implicit val intCanTruthy: CanTruthy[Int] = CanTruthy.truthys({
         case 0 => false
         case _ => true
       })
intCanTruthy: CanTruthy[Int] = CanTruthy$$anon$1@71780051

2) The toCanIsTruthyOps implicit conversion method is in scope when evaluating 10.truthy. When the compiler sees that Int has no truthy method, it looks for an implicit conversion that turns 10 into an object that does have one, and so it will try toCanIsTruthyOps for this conversion.

3) I suspect that the implicit value intCanTruthy somehow might be used when the compiler tries the toCanIsTruthyOps implicit conversion on 10.

But this is where I really get lost. I just don't see how the implicit resolution process proceeds after this. What happens next ? How and Why ?

In other words, I don't know what is the implicit resolution sequence that allows the compiler to find the implementation of the truthy method when evaluating 10.truthy.


How will 10 be converted to some object which does have the correct truthy method ?

What will that object be ?

Where will that object come from?

Could someone please explain, in detail, how the implicit resolution takes place when evaluating 10.truthy ?

How does the self-type { self => ... in CanTruthy play a role in the implicit resolution process ?

scala> :paste
// Entering paste mode (ctrl-D to finish)

trait CanTruthy[A] { self =>
  /** @return true, if `a` is truthy. */
  def truthys(a: A): Boolean
}
object CanTruthy {
  def apply[A](implicit ev: CanTruthy[A]): CanTruthy[A] = ev
  def truthys[A](f: A => Boolean): CanTruthy[A] = new CanTruthy[A] {
    def truthys(a: A): Boolean = f(a)
  }
}
trait CanTruthyOps[A] {
  def self: A
  implicit def F: CanTruthy[A]
  final def truthy: Boolean = F.truthys(self)
}
object ToCanIsTruthyOps {
  implicit def toCanIsTruthyOps[A](v: A)(implicit ev: CanTruthy[A]) =
    new CanTruthyOps[A] {
      def self = v
      implicit def F: CanTruthy[A] = ev
    }
}

// Exiting paste mode, now interpreting.

defined trait CanTruthy
defined module CanTruthy
defined trait CanTruthyOps
defined module ToCanIsTruthyOps

Trying out the type class on 10 :

scala> import ToCanIsTruthyOps._
import ToCanIsTruthyOps._

scala> implicit val intCanTruthy: CanTruthy[Int] = CanTruthy.truthys({
         case 0 => false
         case _ => true
       })
intCanTruthy: CanTruthy[Int] = CanTruthy$$anon$1@71780051

scala> 10.truthy
res6: Boolean = true
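To make the resolution concrete: the compiler rewrites 10.truthy into an explicit call of the conversion, filling the implicit parameter list with intCanTruthy. A self-contained sketch (definitions condensed from the ones in the question):

```scala
// Condensed versions of the definitions in the question.
trait CanTruthy[A] { def truthys(a: A): Boolean }
trait CanTruthyOps[A] {
  def self: A
  def F: CanTruthy[A]
  final def truthy: Boolean = F.truthys(self)
}
object ToCanIsTruthyOps {
  implicit def toCanIsTruthyOps[A](v: A)(implicit ev: CanTruthy[A]): CanTruthyOps[A] =
    new CanTruthyOps[A] { def self = v; def F = ev }
}

implicit val intCanTruthy: CanTruthy[Int] =
  new CanTruthy[Int] { def truthys(a: Int) = a != 0 }

// `10.truthy` does not type-check on Int, so the compiler searches for
// an implicit view from Int to something with a `truthy` member, finds
// toCanIsTruthyOps, and fills its second parameter list with the
// in-scope CanTruthy[Int]. The fully explicit form is:
val explicitForm: Boolean =
  ToCanIsTruthyOps.toCanIsTruthyOps(10)(intCanTruthy).truthy
// explicitForm == true

import ToCanIsTruthyOps._
val sugared: Boolean = 10.truthy // same call, resolved implicitly
```

(The `self =>` self-type alias in CanTruthy plays no part in the resolution; it is just another name for `this`.)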

by jhegedus at January 29, 2015 09:20 PM


What is "potential speedup" in parallel computing?

There is an example problem from p. 506 of Computer Organization and Design, Fifth Edition: The Hardware/Software Interface by David A. Patterson and John L. Hennessy [the problem statement is given as an image].

I wonder how "potential speedup" is defined? The book doesn't give its definition.

In the example, since the speedup with $10$ processors is $55\%$ of the potential speedup, the potential speedup should be $5.5 / 55\% = 10$. It is equal to the number of processors.

Since the execution time before improvement has a part ($10t$) that cannot be sped up by parallel computing on multiple processors, according to Amdahl's law (outlined in the blue box) the potential speedup must be smaller than the number of processors.

So I am puzzled.

Google books has an earlier (4th) edition which has a similar example too.
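For reference, the speedup formula behind the blue box, written out with $T_{\text{serial}}$ for the part that cannot be parallelized; the arithmetic $5.5 / 55\% = 10$ suggests the book is measuring against the ideal linear speedup of $n$ processors, not against the Amdahl limit:

```latex
\text{speedup}(n) \;=\; \frac{T_{\text{serial}} + T_{\text{parallel}}}
                             {T_{\text{serial}} + T_{\text{parallel}}/n}
% As n grows this approaches, but never reaches,
\lim_{n\to\infty} \text{speedup}(n) \;=\; 1 + \frac{T_{\text{parallel}}}{T_{\text{serial}}}
```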

by Tim at January 29, 2015 09:18 PM




One of the problems with cryptography has always been that ...

One of the problems with cryptography has always been that you have these long keys, but nobody can memorize them. So you put them on a storage device, which the oppressive state can then confiscate. So you protect it with a password. But the oppressive state can compel you to hand that over.

So what to do? Proposals like "then you just build in deniability" came up fairly early with respect to crypto file systems. As far as I know the idea came from Julian Assange, back when he was still a hacker and not a messiah. You have two passphrases: one unlocks the real data, and one unlocks a harmless file system that you can show in such cases. But the effect on an oppressive state is rather the opposite of the intended one. When in doubt, they will no longer believe anyone that they were given the correct passphrase, and then you sit in coercive detention for the rest of your life.

In 2012 there was a paper that I missed, and that Bruce Schneier has just pointed out on his blog: it is about learning a password so subconsciously that you can type it in but cannot consciously recite it. Of course, that too only helps so much against an oppressive state. But it is extremely exciting nonetheless.

January 29, 2015 09:02 PM

The Russians are finally also starting to sell alleged ...

The Russians are finally also starting to sell alleged intelligence-service murmurings as evidence :-)

Why should they have to stick to journalistic principles when, over in the US, Colin Powell with a few Photoshop jobs and a PowerPoint was sufficient as evidence for an invasion.

January 29, 2015 09:02 PM

Ooooooh, I have been waiting for this one for a while: The suspect ...

Ooooooh, I have been waiting for this one for a while:
The suspect is one of the most important officials in the police union (GdP).

January 29, 2015 09:02 PM

Were you worried that the Verfassungsschutz ...

Were you worried that the Verfassungsschutz took in a bunch of old Nazis after the war?

You were of course completely right, that is exactly what happened, but there is an all-clear nonetheless, because:

For the continuity with the Nazi secret services was rather thin, and the agency was no more blind in its right eye than other authorities.
In practice that looked like this:
the system had two kinds of employees in the post-war years: untainted staff who fit the legend of a new beginning, and former Gestapo or SS people who operated under false names, at front companies or state agencies, but were in truth paid by the federal office. In Cologne they were cunningly adept at keeping up appearances back then.
But it's all not so bad, because most of the old Nazis were already on the BKA's payroll. Besides, the Allies insisted that no SS, SD, or Gestapo staff be hired. So they simply kept them on the books as "freelancers".
Gustav Halswick, a former SS-Sturmbannführer, set up a front company, the firm "Dokumentenforschung", through which the freelancers could be paid and socially insured. Even the tax office was in on it; no tax auditor was to uncover the scam.
The best denazification money can buy!

January 29, 2015 09:02 PM

Edathy's statements before the committee of inquiry have ...

Edathy's statements before the committee of inquiry have now been confirmed by witnesses. What a delightful development! And off to jail with the SPD. Drugs, child pornography, obstruction of justice, obstruction of justice in office, false testimony, ... which of those they each go down for is not so important to me personally. The main thing is that they get off the street.

January 29, 2015 09:02 PM

Howler of the week: "PEGIDA was right. We were ...

Howler of the week: "PEGIDA was right. We were wrong."

There they use the federal government's 2013 migration report to work out the top five countries of origin of Islamist destroyers of the Occident.

  1. The Islamic Republic of Poland
  2. The Caliphate of Romania
  3. The United Emirates of Bulgaria
  4. The Sultanate of Italy
  5. Saudi Spain
Wait, it gets better: net immigration from Turkey is negative. More people move from Germany to Turkey than the other way around.

January 29, 2015 09:02 PM

How does oversight of the intelligence services actually work in ...

How does oversight of the intelligence services actually work in practice?

It doesn't. Sure. I mean apart from that. What do they actually do to convince themselves that everything is lawful?

Heise reports from the NSA committee:

Konstantin von Notz (Greens) spoke of a mere "plausibility check", which Golke, after one of his many pauses for thought, confirmed. Ultimately only the BND itself knows whether the machine actually deployed is the same one that was examined in the development stage: "You have to rely on the audited body anyway." Afterwards, devices could be plugged together arbitrarily. He lacked the rights for later inspections at the BND.
But wait, it gets better! There was a lab visit at the BND. It went like this:
During the lab visit at the BND, an important role was played by "building trust" that "they do it the way it is described in the documents". What mattered was "what feeling I have there".
Great! He confused the BND field trip with a visit to the psychologist!

So much for that. I would find it totally swell if we could switch from faith-based to fact-based there.

January 29, 2015 09:02 PM


Clojure: How to find out the arity of function at runtime?

Given a function object or name, how can I determine its arity? Something like (arity func-name) .

I hope there is a way, since arity is pretty central in Clojure

by GabiMe at January 29, 2015 09:01 PM



Error: jar not found in bash after installing OpenJDK

I installed openJDK. java -version

OpenJDK Runtime Env. IcedTea6 1.13.4 (...)
OpenJDK Client VM (build 23.25...)

when I execute jar xvf myfile.war I get error:

bash: jar : command not found

by P.Brian.Mackey at January 29, 2015 08:48 PM


Number of ways to extend almost independent sets of graph

Given a regular graph $G$ on $n$ vertices, denote by $\alpha(G)>1$ its independence number.

Denote by $\Gamma(G)$ the collection of subsets of independent vertices in $G$ of cardinality $\alpha(G)-1$.

To each $\gamma\in\Gamma(G)$, assign a number $N(\gamma)$: the number of ways $\gamma$ can be extended by an additional vertex so that the augmented subset remains independent (attains cardinality $\alpha(G)$).

Denote $N(G)=\max_{\gamma\in\Gamma(G)}N(\gamma)$.

Denote by $M(G)$ the maximum number of disjoint independent sets of $G$ that attain cardinality $\alpha(G)$.

Given graph, show or disprove that $$\frac{\sqrt{M(G)}}{\max(\alpha(G),N(G))}=O(1)?$$

Does $N(G)$ have a terminology?

by Turbo at January 29, 2015 08:47 PM


How to convert a result of ask to appropriate type?

I'm using ask (?) to get a value which is of type Set[String] from an Actor. However, the actor returns Future[Any].

What is the correct way to convert this Future[Any] to Future[Set[String]]?

val result : Future[Any] = myactor ? GetSomeValue
//convert Future[Any] to Future[Set[String]]
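The standard answer is Future.mapTo, which performs a runtime-checked cast from Future[Any] to Future[T] (a wrong reply type turns into a failed future rather than a later ClassCastException). A sketch with a plain Future standing in for the ask result:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Stand-in for `myactor ? GetSomeValue`, which is typed Future[Any].
val result: Future[Any] = Future.successful(Set("a", "b"))

// mapTo does a ClassTag-checked downcast to the expected type.
val typed: Future[Set[String]] = result.mapTo[Set[String]]

println(Await.result(typed, 1.second)) // Set(a, b)
```

Note that because of erasure only the outer type (Set) is actually checked at runtime, not the element type.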

by Soumya Simanta at January 29, 2015 08:45 PM

How to chain Future's of client requests without nesting with onComplete?

I need to query a RESTful service that always returns a JSON response. I need to contact it a few times, always with some more information that I learned from the previous request. I'm using Akka2, Scala, Jerkson and Spray-Can.

My current approach seems to work, but it looks ugly and requires nesting everything. I read that there should be some techniques available regarding chaining and such, but I couldn't figure out how to apply them to my current use-case.

Here is the code I'm talking about:

def discoverInitialConfig(hostname: String, bucket: String) = {

    val poolResponse: Future[HttpResponse] = 
      HttpDialog(httpClient, hostname, 8091)
      .send(HttpRequest(uri = "/pools"))

    poolResponse onComplete { 
      case Right(response) =>
        log.debug("Received the following global pools config: {}", response)

        val pool = parse[PoolsConfig](response.bodyAsString)
          .find(_("name") == defaultPoolname)

        val selectedPoolResponse: Future[HttpResponse] =
          HttpDialog(httpClient, hostname, 8091)
          .send(HttpRequest(uri = pool("uri")))

        selectedPoolResponse onComplete {
          case Right(response) =>
            log.debug("Received the following pool config: {}", response)

          case Left(failure) =>
            log.error("Could not load the pool config! {}", failure)
        }

      case Left(failure) =>
        log.error("Could not load the global pools config! {}", failure)
    }
}
I think you can see the pattern. Contact the REST service, read it, on success parse it into a JSON case class, extract information out and then do the next call.

My structure here is only two-levels deep but I need to add a third level as well.

Is there a technique available to improve this for better readability or can I only stick with this? If you need any further information I'm happy to provide it. You can see the full code here:

Thanks, Michael
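The usual way to flatten such nesting is to chain the futures with flatMap/map instead of onComplete callbacks; a for-comprehension is sugar for exactly that. A hedged sketch, with hypothetical functions standing in for the two HttpDialog round trips:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical stand-ins for the HttpDialog calls in the question.
def fetchPoolsConfig(): Future[String]     = Future.successful("/pools/default")
def fetchPool(uri: String): Future[String] = Future.successful("config for " + uri)

// Each step starts only after the previous future succeeds; a third
// level is just one more generator line.
val config: Future[String] = for {
  poolUri <- fetchPoolsConfig()
  pool    <- fetchPool(poolUri)
} yield pool

// A failure at any step short-circuits into a failed future, so the
// error is handled once at the end instead of in every callback.
config.onComplete(r => println(r))
```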

by moidaschl at January 29, 2015 08:42 PM


Turing Machine notation

I'm a bit confused by some of the notation being used for Turing machines in one of our exercises in class.

The question gives us a string $\alpha \in \{0,1\}^*$ and the function $\mathsf{int}(\alpha)$ that converts a binary number to its base-10 form (for example, $\mathsf{int}(00010) = \mathsf{int}(10) = 2$).

Now comes the tricky part: Define the language $L \subseteq \{0, 1, \#\}^*$ by:

$L = \{\alpha\#\beta \mid \alpha, \beta \in \{0, 1\}^*$ and $|\beta| \ge \mathsf{int}(\alpha) \ge 1$ and **$\beta\{\mathsf{int}(\alpha)\} = 1$**$\}$.

This bolded section is confusing me, and it seems like there are different variations of T.M. notation as well...

Could someone give me a rough approximation of what this might mean?

Extra: Examples for language $L$:
$\#111 \notin L$
$00010\#11100 \in L$
$00011\#010111 \notin L$
$00011\#11 \notin L$
$1\#\#1 \notin L$

Thanks in advance.

by Akatzki at January 29, 2015 08:39 PM


How to use cookie and BASIC authentication together?

I am using a combination of cookie and basic authentication. In the basic authentication, it takes a function

Option[UserPass] => Future[Option[T]]

and returns a Directive[T].

I wish to create a directive on cookie which takes a function

HttpCookie => Future[T]

and returns a Directive[T].

Hence I can do a combined auth directive of cookieAuth | basicAuth.

The closest I could get is:

def myFunction:HttpCookie => Future[String]

val cookieAuth:Directive[String] = cookie("MyCookie").flatMap { cookie =>

But the signatures do not match. I get the exception:

type mismatch;
  found   : spray.routing.Directive[shapeless.::[String,shapeless.HNil]]
  required: spray.routing.Directive[String]

by J Pullar at January 29, 2015 08:38 PM


What is meant by the notation $L(...)$?

I am currently studying about formal languages and automata. I am trying to solve a problem but there is a notation whose meaning I'm not sure of.

I have a question to find out the relationship between two languages $L_1$ and $L_2$:


$L_1$ is the language generated by the grammar $S \to aSa \mid bS \mid e$

$L_2 = L((ab+ba)^*)$

My question is, does it mean that $L(ab+ba)$ is the set $\{ab, ba\}$?

I mean, $L(ab+ba) = L(ab)\cup L(ba)$

by ka8512 at January 29, 2015 08:37 PM

If $g ∘ f$ is primitive recursive, are $f$ and $g$, too?

Assuming I have functions $f, g : \mathbb{N} \to \mathbb{N}$ and I know that $g \circ f$ is a primitive recursive function. What can I tell about $f$ and $g$? Are they primitive recursive as well? Or at least one of them?
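A worked special case may help pin down the question: the composition being primitive recursive constrains neither component by itself.

```latex
% Take g(x) = 0 for all x (the zero function, primitive recursive).
% Then for ANY total f : \mathbb{N} \to \mathbb{N},
(g \circ f)(x) \;=\; g(f(x)) \;=\; 0 \quad \text{for all } x \in \mathbb{N},
% so g \circ f is the constant zero function, hence primitive recursive,
% even when f is not primitive recursive (e.g. the Ackermann function)
% or not even computable. Symmetrically, if f is constant, g \circ f is
% constant regardless of what g does on the rest of its domain.
```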

by erb at January 29, 2015 08:37 PM

Heuristic for sokoban puzzle problem

I am trying to write IDA* for the Sokoban puzzle problem, but it seems that my heuristic is not good enough to make the algorithm fast.

My heuristic is to consider $h(s)$ as minimum distance between the Player and Boxes plus minimum distance between a Box and a Target in state $s$.

Can you please provide some better heuristics for this problem?

Edit: consider that the number of Boxes (and Targets) is at most 3.

by CoderInNetwork at January 29, 2015 08:37 PM

How does this proof show that sequences of $O(1)$ polynomially bounded Kolmogorov complexity are NOT the polynomial computable ones?

Theorem 19 For every recursive time bound f , there is an infinite sequence that is in $C4[O(1), poly]$ but not $DTIME(f (n))$-computable. (The sequence is automatically in $C5[O(1), poly]$ and in $CK[O(1), poly]$.)

Proof: The main idea is to build a tally set $T\subseteq\{0\}^*$ with the following properties:

  • $\chi^T$ is not $DTIME(f (n))$-computable, where $\chi^T$ is the characteristic sequence of $T$ over $\{0\}^*$

  • $T \in\ DTIME(g(n))$, for some nondecreasing time-constructible time bound $g$ such that $g(n) \gt f (n)$.

  • strings in $T$ are very far one from another; more precisely, $T$ contains only strings of the form $0^{s(m)}$ , where $s$ is defined inductively by: $s(1) = 1;$ $s(m + 1) = g(s(m))$ [...]

(the proof proceeds by using the third property to show that it's easy/polynomial to create $\chi^{T \leq n}$ up to an uncertainty in exactly one hard bit, and so it is possible to consider two machines, one answering $1$ and an other answering $0$ for it, fulfilling the requirement of $\chi^T$ being in $C4[O(1), poly]$ )

This Theorem is claiming to build a tally set that does the job, and I understand how the proof proceeds, once it is taken granted that the set with the specified properties exists, but I don't see its existence proven. How do we know that such a tally set exists for every recursive $f(n)$ that is not $\mathrm{DTIME}(f(n))$ computable as specified in property (1)?

And why do we have to guarantee in property (2) that the $g(n)$ bound is greater or equal to $f(n)$?

Am I missing something basic about tally sets, or sets vs characteristic sequences in general, or some trivial hierarchy theorem?

by Attila Szasz at January 29, 2015 08:37 PM


What causes the call and put volatility surface to differ?

I currently have a local volatility model that uses the standard Black Scholes assumptions.

When calculating the volatility surface, what causes the difference between the call volatility surface, and the put surface?

by Jeffrey at January 29, 2015 08:34 PM



Humane guidance for sbt DSL

Every time I need to do something beyond the entirely trivial with sbt, I find myself wasting a whole lot of time. The official documentation is story-like and cyclic, and not helpful for wrangling the DSL. The DSL, at large, is left undocumented other than its scaladoc. E.g. examine as a case in point.

Can someone recommend a humane tutorial or reference covering the topics of the last link, or alternatively, better yet, provide clear constructive descriptions for the following:

  1. Keys

  2. Settings

  3. Tasks

  4. Scopes

  5. Key operators and methods of the DSL relevant to the above entities/classes

  6. Key out-of-the-box objects that can be interacted with in the DSL

  7. How to define and call functions as opposed to ordinary scala code, and are .scala build definitions really deprecated?

  8. How to split build definitions over several files rather than having one huge long pile of code in build.sbt (or, how to structure a .sbt file that you can still read a month later).

  9. Multi project .sbt v.s. bare project .sbt - how do you tell the difference?

  10. Is there any ScalaIDE support for sbt files?

Please focus only on sbt 0.13.x, as everything else is getting old...

by matt at January 29, 2015 08:22 PM

Parsing a torrent file

I'm trying to read in a .torrent file with the following code:

val source = Source.fromFile(..., "utf-8")
val lines = source.mkString

But when I run my program I get the following exception:

Exception in thread "main" java.nio.charset.MalformedInputException: Input length = 1

I have tried putting no charset and get the same issue.

What is the problem and what should the charset be for reading a .torrent file?
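For what it's worth, .torrent files are bencoded binary data, so decoding them as UTF-8 text is exactly what triggers the MalformedInputException. Reading raw bytes sidesteps the charset entirely; if a String is unavoidable, ISO-8859-1 maps every byte to one char losslessly. A sketch:

```scala
import java.nio.file.{Files, Paths}

// Bencoded data is binary: read bytes, don't decode text.
def readTorrent(path: String): Array[Byte] =
  Files.readAllBytes(Paths.get(path))

// If some API insists on a String, ISO-8859-1 is a 1:1 byte-to-char
// mapping, so no input can be "malformed":
def readTorrentAsString(path: String): String =
  new String(readTorrent(path), "ISO-8859-1")
```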

by jcm at January 29, 2015 08:17 PM

Run mysql script from network location

I currently have a MySQL server installed in a jail on my FreeNAS machine. I can connect to the server via MySQL Workbench locally, or SSH to the NAS, log into the jail and then into the MySQL server.

Most of my SQL scripts are saved on a networked drive that is accessible via my NAS.

I'm trying to run these scripts via my second means of access. However, I get `source error 2` when using the \. command followed by the path.

Obviously there is a problem with passing a file (or its path) on a network drive through to a remotely connected service.

Any ideas would be appreciated.

by Greg K at January 29, 2015 08:15 PM

Scala 2.11 Macros - Parse JDBC row to case class

I'm trying to make a macro for my Play Framework project (2.3.7). I'm trying to make it easy to map an anorm Row to a case class. When I try to compile the attached code, I am getting the following exception:

[error] ***:15: exception    during macro expansion:
[error] java.util.NoSuchElementException: None.get
[error]     at scala.None$.get(Option.scala:322)
[error]     at scala.None$.get(Option.scala:320)
[error]     at ***.ModelMacros$.reads(ModelMacros.scala:29)

import anorm._

import scala.reflect.macros.whitebox
import scala.language.experimental.macros

object ModelMacros {
  implicit def sqlMapper[T]: Row => T = macro reads[T]

  def reads[T: c.WeakTypeTag](c: whitebox.Context): c.Expr[Row => T] = {
    import c.universe._

    val t = weakTypeOf[T]
    val companion = t.typeSymbol.companion

    val fields = t.declarations.collectFirst {
      case m: MethodSymbol if m.isPrimaryConstructor ⇒ m

    val fromMapParams = { field =>
      val name =
      val decoded = name.decodedName.toString
      val returnType = t.decl(name).typeSignature

      val fixedName = camelToUnderscores(decoded)


    c.Expr[Row => T] {
        { row => $companion(..$fromMapParams) }

I do not understand what to do to get the generic T to reflect appropriately. Any help would be greatly appreciated. Thanks!

by Sean Freitag at January 29, 2015 08:08 PM


Approaching this road-toll story always feels like ...

Approaching this road-toll story always feels like trying to nail a pudding to the wall. Apart from vague murmurings and unrealistic-looking targets, there was nothing concrete.

Until now. The "Zeit" has sued a few details into the open.

January 29, 2015 08:01 PM



How to access my forms properties when validation fails in the fold call?

I have an action where my form posts too like:

def update() = Action { implicit request =>
     errorForm => {
        if(errorForm.get.isDefined) {
           val id = // runtime error during form post
     form => {
         val id =   // works fine!
         Ok("this works fine" +


When I do the above, I get an error:

[NoSuchElementException: None.get]

If there are no validation errors, the form post works just fine in the 'success' part of the fold call.

I need to get the id value in the error section, how can I do that?

by Blankman at January 29, 2015 07:53 PM



How to create a Play project in IntelliJ IDEA 14 Community Edition?

I am trying to create a Scala project in IntelliJ IDEA 14. As mentioned in IntelliJ IDEA's help, the Scala plugin already has support for Play 2.x.

I have installed the Scala plugin, and when I create a new project I can select Scala > Scala and Scala > SBT projects but there's no Scala > Play 2.x.

Are there any additional steps needed to make this available? I am using IDEA 14 Community Edition.

I have tried importing the module into a Scala project using the Play-generated .iml file, but the IDE could not handle it well, e.g. it was finding errors in completely fine Play views.

by Somal Somalski at January 29, 2015 07:49 PM


Solving $Isomorphism$ using $AUTOM$ in polynomial time

Let $Iso$ be the language of all $<G,H>$ such that $G$ and $H$ are isomorphic, and $AUTOM$ be the language of all $G$'s such that $G$ has a non-trivial automorphism.

I'd like to show that, assuming $AUTOM$ can be solved in polynomial time, so can be $Iso$.

There is an obvious reduction $AUTOM \leq_P Iso$, so I thought there is likely to be one in the opposite direction, $Iso \leq_P AUTOM$, which would let me solve $Iso$ in polynomial time. The problem is that I couldn't think of one, so I thought maybe I can use $AUTOM$'s polynomial-time algorithm some other way to solve $Iso$. I read on Wikipedia that

For, G and H are isomorphic if and only if the disconnected graph formed by the disjoint union of graphs G and H has an automorphism that swaps the two components.

But the problem I see here is that I only have a polynomial-time algorithm for deciding $AUTOM$; I can't modify it to check whether the automorphism found swaps the two components!

I also read:

In fact, just counting the automorphisms is polynomial-time equivalent to graph isomorphism.

I didn't understand why this statement is correct. And assuming it is, does it help me solve $Iso$ using $AUTOM$? Or is there another way?

by TheEmeritus at January 29, 2015 07:49 PM


Why does IDEA mark certain parts of Play code red?

I'm pretty new to the Play framework. I've been trying to configure it so I can use it with IntelliJ Ultimate.

I use following:

  • IntelliJ Ultimate 14.03
  • Scala plugin for IntelliJ 1.2.1
  • Play Framework 2.3.7 (the 1.2 MB online-installer version)
  • Scala 2.11
  • JDK 1.7
  • Windows 7

My problem is that I can't make the errors disappear. Below is a simple example. When I create something more complicated (mappings etc.) I get entire blocks of red (and the IDE does not suggest any completions for the more complicated code).

What I've tried to fix it:

  • deleting the .idea folder and generating it again
  • cleaning sbt
  • generating a Play app from inside activator and also from IntelliJ
  • re-installing IntelliJ

This is how I create the app from inside IntelliJ

I'm new both to Scala and Play, but I've done some research and I didn't end up with working solution. The same project works on Eclipse, but I would like to stick with IntelliJ.

by Lukasz Pniewski at January 29, 2015 07:44 PM


CTL - model checking for formula $A[a \, U \, b]$

I'm trying to verify whether the following model satisfies $A[a \, U \, b]$:

The model on which i want to verify the formula

The algorithm I'm using is taken from "Concepts, Algorithms, and Tools for Model Checking", Joost-Pieter Katoen. In particular I applied the SatAU part (previous calculation of the states in which a and b are valid):

SatAU algorithm

Intuitively the formula is satisfied in states $\{s_1, s_2, s_3, s_4\}$ (so the model does not satisfy the property, because the set doesn't contain the other initial state $s_0$). But if I apply the algorithm above, the only states that I get are $\{s_2, s_3, s_4\}$, because the condition used to generate the new $Q$ set (row 7) says to take a state $s$ only if it has a connection with EACH element of the old $Q$ set. Is my interpretation of the algorithm correct? How can I also get $s_1$ from the algorithm?

by Fabrizio Duroni at January 29, 2015 07:34 PM


FreeBSD kernel nat or natd?

As I notice more often with FreeBSD, there are always plenty of ways that lead to some specific goal.

After figuring out which firewall I wanted (I chose ipfw), I am now completely unsure about which way to do Network Address Translation (NAT).

As I have discovered, there are two ways to do NAT: I could use the kernel-space ipfw nat, or I could use the userspace natd.

The only one of these described in the FreeBSD handbook is natd.

What I would like to know is: what are the main differences between these? Which one is more popular?

Of course I would also like to learn to fish, so how can I find out these differences from the manuals/handbooks?
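For concreteness, a minimal sketch of the in-kernel variant (the interface name em0, addresses, and rule numbers are placeholders of mine, not from the handbook; the natd route would instead use divert rules plus the natd daemon):

```
# load the in-kernel NAT module
kldload ipfw_nat

# define NAT instance 1 on the outside interface
ipfw nat 1 config if em0

# push traffic through that instance in both directions
ipfw add 100 nat 1 ip from 192.168.1.0/24 to any out via em0
ipfw add 200 nat 1 ip from any to any in via em0
```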

by Peter Smit at January 29, 2015 07:28 PM



Prove transitivity of big-O notation

I'm doing a practice question (not graded HW) to understand mathematical proofs and their application to Big O proofs. So far, however, the very first problem in my text is stumping me wholly.

Suppose $f(n) = O(g(n))$ and $g(n) = O(h(n))$ (all functions are positive).

Prove that $f(n) = O(h(n))$.

I am having lots of trouble with this, and it would be greatly helpful if someone showed me how to do this.
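For what it's worth, the standard argument just multiplies the two witness constants; a sketch:

```latex
f(n)=O(g(n)) \;\Rightarrow\; \exists\, c_1>0,\ n_1:\quad f(n)\le c_1\,g(n) \text{ for all } n\ge n_1,
g(n)=O(h(n)) \;\Rightarrow\; \exists\, c_2>0,\ n_2:\quad g(n)\le c_2\,h(n) \text{ for all } n\ge n_2.
\text{For } n \ge n_0 := \max(n_1,n_2) \text{ both bounds hold, so}
f(n) \;\le\; c_1\,g(n) \;\le\; c_1 c_2\,h(n),
\text{i.e. } f(n)=O(h(n)) \text{ with witnesses } c := c_1 c_2 \text{ and } n_0.
```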

by STC at January 29, 2015 07:20 PM




More insights from the Greek finance minister: "Whatever ...

More insights from the Greek finance minister:
"Whatever Germany says or does, it will have to pay in any case," Varoufakis told the French newspaper "La Tribune".
So far, so obvious.
It is a pity, but the many billions from Germany and the other "euro rescuers" are lost anyway, "in a black hole of debt," Varoufakis said recently in a television interview.
The whole currency union is completely misdesigned, Varoufakis told "La Tribune", and that is the fault of the French, who wanted to use the currency union to get their hands on German currency reserves in order to live beyond their means.
Aha, now it gets interesting! You know, I could quote the entire article here. Every single statement by the man is worth its weight in gold. He also says that Greece cannot leave the eurozone, and even if it wanted to, that would take a few months at the earliest, and if it were announced now, massive capital flight would set in and everything would be even worse.

By the way, he also does not think that Germany worked for its prosperity. He thinks the Americans invested in Germany for strategic reasons and thereby financed its industry.

The man also runs a blog, by the way, of which I have so far heard only good things.

January 29, 2015 07:01 PM





error: not found: value PlayScala

Trying to compile a project containing Play Framework as a subproject I receive this error:

~/my-project/web/build.sbt:8: error: not found: value PlayScala
lazy val `web` = (project in file(".")).enablePlugins(PlayScala)


name := "my-project"

version := "1.0"

scalaVersion := "2.11.5"

lazy val `my-project` = (project in file("."))
  .aggregate( web)

lazy val web = project


logLevel := Level.Warn


name := "web"

version := "1.0"

scalaVersion := "2.11.1"

lazy val `web` = (project in file(".")).enablePlugins(PlayScala)

libraryDependencies += "org.scalatest" % "scalatest_2.11" % "2.2.1" % "test"

libraryDependencies ++= Seq( jdbc , anorm , cache , ws )

unmanagedResourceDirectories in Test <+=  baseDirectory ( _ /"target/web/public/test" ) 


logLevel := Level.Warn

resolvers += "Typesafe repository" at ""

addSbtPlugin("" % "sbt-plugin" % "2.3.7")

The error goes away when I copy web/project/plugins.sbt to project/plugins.sbt

This is not what I want since web is a subproject and PlayScala is a dependency of the subproject alone.

by BAR at January 29, 2015 06:56 PM


SEC 13F Security List has incorrect CUSIP numbers?

I'm building a database of 13F forms with 13F security lists, along with integrity checks.

I implemented the CUSIP check-digit algorithm to verify that I'm getting correct CUSIP numbers and don't mess anything up while parsing the PDFs.

I found that half of the CUSIP numbers on the 13F Security Lists are incorrect - the checksum digit differs from the checksum computed by the algorithm (algorithm described here:

For example, take 3rd page of where you can find line:


We read CUSIP from line as D18190908. Now, check this against available on-line CUSIP validator (i.e.: and you'll get that this CUSIP is INVALID!

From the algorithm I get a checksum digit equal to 6, so the CUSIP should read D18190906. Check this modified CUSIP against the on-line validator and you'll see that replacing 8 with 6 makes the CUSIP valid.

I've done some more research to check that it's not a one-time error and found that 50% of the CUSIPs in the 10 most recent 13F Security Lists are invalid in the same way as described in the example.

Have you faced this issue? What's wrong with the 13F Security Lists?
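For reference, the standard CUSIP modulus-10 "double-add-double" check digit can be sketched as follows (the function name is mine; it reproduces the 6 computed above for D1819090):

```python
def cusip_check_digit(base8):
    """Check digit for an 8-character CUSIP base (digits, letters, *, @, #)."""
    total = 0
    for i, ch in enumerate(base8.upper()):
        if ch.isdigit():
            v = int(ch)
        elif ch.isalpha():
            v = ord(ch) - ord('A') + 10      # A=10, B=11, ..., Z=35
        else:
            v = {'*': 36, '@': 37, '#': 38}[ch]
        if i % 2 == 1:                        # double every second character
            v *= 2
        total += v // 10 + v % 10             # add the digits of v
    return (10 - total % 10) % 10

print(cusip_check_digit("D1819090"))  # → 6, so the full CUSIP is D18190906
```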

by omikron at January 29, 2015 06:53 PM


clojure xml parsing slow when dtd used

I'm using to parse XML. Unfortunately the XML being sent back from the server is malformed: it includes escaped Unicode and special characters but no DTD. I get around this by manually inserting

<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"\">"

into the XML; however, when I do this the parsing time goes from under 1 second to over 15.

So far I've turned off validation by passing :validating false to the parse function, but this is suboptimal. Is there a way to speed this up?

Edit: An example of a document sent:

<?xml version="1.0" encoding="utf-8"?>
    <title>A book &mdash Title</title>
    <synopsis>A long-winded, multi-paragraph synopsis with unicode</synopsis>

Error: XMLStreamException ParseError at [row,col]:[30,267] Message: The entity "mdash" was referenced, but not declared. (

by Nick at January 29, 2015 06:47 PM



Build file format not correct, attempting to integrate scalariform autoformatter

I'm attempting to implement scalariform autoformatter for Scala.

Here is build.sbt :

import com.typesafe.sbt.SbtScalariform._

import scalariform.formatter.preferences._

name := "MyApp"


ScalariformKeys.preferences := ScalariformKeys.preferences.value
  .setPreference(RewriteArrowSymbols, true)
  .setPreference(AlignParameters, true)
  .setPreference(AlignSingleLineCaseStatements, true)
  .setPreference(PlaceScaladocAsterisksBeneathSecondAsterisk, true)
  .setPreference(MultilineScaladocCommentsStartOnFirstLine, true)

plugins.sbt :

addSbtPlugin("com.typesafe.sbt" % "sbt-scalariform" % "1.3.0")

but when I run the sbt reload command I receive this error:

 error: object typesafe is not a member of package com
import com.typesafe.sbt.SbtScalariform._

Is the build file format correct?

Update : This appears to be a proxy issue : SBT does not seem to use my proxy settings.

by blue-sky at January 29, 2015 06:36 PM


Why is the Halting problem recursively enumerable?

If we take this definition as R.E. set definition (Computability, Complexity and Languages book written by Davis in page 79)

Definition. The set $B\subseteq \Bbb N$ is called r.e. if there is a partially computable function $g(x)$ such that

$B = \{x \in \Bbb N \mid g(x) \downarrow\}$

The Halting problem is the set of pairs $(x,y)$ such that the program with number $y$ halts on $x$. What I really can't understand is: since there is no program deciding the Halting problem, what is the $g(x)$ that is supposed to witness it?

by Drupalist at January 29, 2015 06:33 PM

Is Universality Theorem applicable to Halting problem?

This is Universality theorem In the Computability, Complexity and Languages book written by Davis in page 70:

If $\phi^{(n)}(x_1,\dots,x_n,y) = \psi_P(x_1,\dots,x_n)$ where $\#(P) = y$

Theorem 3.1 (Universality Theorem): for each $n>0$ the function $\phi^{(n)}(x_1,...,x_n,y)$ is partially computable.

From what I understand of this theorem, the universal program is like an interpreter: it takes $x_1,\dots,x_n$ as input and $y$ as a program number, and simulates what the program with number $y$ does on the $x_i$s.

Since there is no program for the halting problem, there is no $y$ for it that can be passed to $\phi^{(n)}(x_1,\dots,x_n,y)$, so I think this theorem is not applicable to the halting problem. Is that right?

by Drupalist at January 29, 2015 06:33 PM


Coming Soon – AWS SDK for Go

My colleague Peter Moon wrote the guest post below and asked me to get it out to the world ASAP!

— Jeff;

AWS currently offers SDKs for seven different programming languages – Java, C#, Ruby, Python, JavaScript, PHP, and Objective C (iOS), and we closely follow the language trends among our customers and the general software community. Since its launch, the Go programming language has had a remarkable growth trajectory, and we have been hearing customer requests for an official AWS SDK with increasing frequency. We listened and decided to deliver a new AWS SDK to our Go-using customers.

As we began our research, we came across aws-go, an SDK from Stripe. This SDK, principally authored by Coda Hale, was developed using model-based generation techniques very similar to how our other official AWS SDKs are developed. We reached out and began discussing possibly contributing to the project, and Stripe offered to transfer ownership of the project to AWS. We gladly agreed to take over the project and to turn it into an officially supported SDK product.

The AWS SDK for Go will initially remain in its current experimental state, while we gather the community’s feedback to harden the APIs, increase the test coverage, and add some key features including request retries, checksum validation, and hooks to request lifecycle events. During this time, we will be developing the SDK in a public GitHub repository at We invite our customers to follow along with our progress and join the development efforts by submitting pull requests and sending us feedback and ideas via GitHub Issues.

We’d like to thank our friends at Stripe for doing an excellent job with starting this project and helping us bootstrap this new SDK.

Peter Moon, Senior Product Manager

by Jeff Barr at January 29, 2015 06:32 PM


Unresolved dependency SBT 0.13.0 after update

Please have a look at the comments to be up to date.

Update SBT to 0.13.0:

I have a couple of projects written in Scala 2.10.2 and built with sbt 0.12.4. As my OS is Ubuntu, I used the sbt .deb package to install sbt 0.12.4. Everything was fine; I built my projects with sbt.

Yesterday I wanted to update sbt to version 0.13.0. I downloaded and installed the new .deb package. The projects' configuration has not been changed.

The failure:

When running sbt after the update I get this failure:

$ sbt
Loading /usr/share/sbt/bin/sbt-launch-lib.bash
Getting org.scala-sbt sbt 0.13.0 ...

:: problems summary ::
        module not found: org.scala-sbt#sbt;0.13.0

    ==== local: tried



        ::          UNRESOLVED DEPENDENCIES         ::


        :: org.scala-sbt#sbt;0.13.0: not found


unresolved dependency: org.scala-sbt#sbt;0.13.0: not found
Error during sbt execution: Error retrieving required libraries
  (see /home/myUser/.sbt/boot/update.log for complete log)
Error: Could not retrieve sbt 0.13.0

The ~/.sbt/update.log file is available here: The ~/.sbt/boot/.update.log file is available here:

How do I fix this dependency resolution?


  1. Other people had similar problems, but not the same one. I don't think this is a problem of build-definition incompatibility, do you? As far as I can see, sbt does not get to the point of reading the project definition.

  2. From where should this file be retrieved? Shouldn't it be included in the sbt installation package? Also, it looks like sbt/Ivy only looks inside the local Ivy repo. There is no sbt artifact with version 0.13.0 in the Maven Central Repository. Do I have to specify another repo or something?

  3. And what about the Scala version? Shouldn't it be specified in the dependency definition? Do I have to specify the Scala version somewhere?

Project configuration:

File: build.sbt:

name := "MyProject"

version := "1.0-SNAPSHOT"

organization := "myOrg"

scalaVersion := "2.10.2"

libraryDependencies += "com.github.nscala-time" %% "nscala-time" % "0.4.2"

File: project/plugins.sbt:

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.3.0")

File: project/

Prior to this problem I did not have this file. I added it trying to solve this problem:


by user573215 at January 29, 2015 06:27 PM


Greek letters in python 2.7?

I am trying to make a graph in Python and I need the bottom axis to have a lowercase mu in its label. Right now it's plt.xlabel("Wavelength [micrometers]"), but I would really like it to be an actual lowercase mu, and I haven't been able to find a way to do this. Does anybody know how I might do this?
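Two common approaches (the variable names are mine; either string can be passed to plt.xlabel):

```python
# -*- coding: utf-8 -*-

# 1. Use the Unicode code point for GREEK SMALL LETTER MU. In Python 2.7
#    the u prefix matters: it must be a unicode string, not a byte string.
unicode_label = u"Wavelength [\u03bcm]"

# 2. Use matplotlib's built-in mathtext: anything between $...$ is rendered
#    TeX-style, so \mu becomes a real mu glyph.
mathtext_label = r"Wavelength [$\mu$m]"
```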

submitted by greenwizardneedsfood

January 29, 2015 06:21 PM


The expression example to be extended in Scala

I wanted to explore different ways to extend this code and then mix these extensions of functionality together as needed at the end.

    // Initial object algebra interface for expressions: integers and addition
    trait ExpAlg[E] {
        def lit(x : Int) : E
        def add(e1 : E, e2 : E) : E
    }

    // An object algebra implementing that interface (evaluation)

    // The evaluation interface
    trait Eval {
        def eval() : Int
    }

    // The object algebra
    trait EvalExpAlg extends ExpAlg[Eval] {
        def lit(x : Int) = new Eval() {
            def eval() = x
        }

        def add(e1 : Eval, e2 : Eval) = new Eval() {
            def eval() = e1.eval() + e2.eval()
        }
    }

    // Evolution 1: Adding subtraction
    trait SubExpAlg[E] extends ExpAlg[E] {
        def sub(e1 : E, e2 : E) : E
    }

    // Updating evaluation:
    trait EvalSubExpAlg extends EvalExpAlg with SubExpAlg[Eval] {
        def sub(e1 : Eval, e2 : Eval) = new Eval() {
            def eval() = e1.eval() - e2.eval()
        }
    }

    // Evolution 2: Adding pretty printing
    trait PPrint {
        def print() : String
    }

    trait PrintExpAlg extends ExpAlg[PPrint] {
      def lit(x: Int) = new PPrint() {
        def print() = x.toString()
      }
      def add(e1: PPrint, e2: PPrint) = new PPrint() {
        def print() = e1.print() + "+" + e2.print()
      }
    }

    trait PrintSubExpAlg extends PrintExpAlg with SubExpAlg[PPrint] {
      def sub(e1: PPrint, e2: PPrint) = new PPrint() {
        def print() = e1.print() + "-" + e2.print()
      }
    }

object OA extends App {
    trait Test extends EvalSubExpAlg with PrintSubExpAlg //error
}

Currently I am getting an error saying that : "illegal inheritance; trait Test inherits different type instances of trait SubExpAlg: pack.SubExpAlg[pack.PPrint] and pack.SubExpAlg[pack.Eval]"

How can I put the two types Eval and PPrint under a common "hat" so they are recognized as types from the same family? Or is that not the right solution, since I may still have conflicting inheritance between members of the two types?


I changed it as in the following:

class Operations

    // Initial object algebra interface for expressions: integers and addition
    trait ExpAlg {
        type Opr <: Operations
        def lit(x : Int) : Opr
        def add(e1 : Opr, e2 : Opr) : Opr
    }

    // An object algebra implementing that interface (evaluation)

    // The evaluation interface
    trait Eval extends Operations {
        def eval() : Int
    }

    // The object algebra
    trait EvalExpAlg extends ExpAlg {
        type Opr = Eval
        def lit(x : Int) = new Eval() {
            def eval() = x
        }

        def add(e1 : Eval, e2 : Eval) = new Eval() {
            def eval() = e1.eval() + e2.eval()
        }
    }

    // Evolution 1: Adding subtraction
    trait SubExpAlg extends ExpAlg {
        def sub(e1 : Opr, e2 : Opr) : Opr
    }

    // Updating evaluation:
    trait EvalSubExpAlg extends EvalExpAlg with SubExpAlg {
        def sub(e1 : Eval, e2 : Eval) = new Eval() {
            def eval() = e1.eval() - e2.eval()
        }
    }

    // Evolution 2: Adding pretty printing
    trait PPrint extends Operations {
        def print() : String
    }

    trait PrintExpAlg extends ExpAlg {
      type Opr = PPrint
      def lit(x: Int) = new PPrint() {
        def print() = x.toString()
      }
      def add(e1: PPrint, e2: PPrint) = new PPrint() {
        def print() = e1.print() + "+" + e2.print()
      }
    }

    trait PrintSubExpAlg extends PrintExpAlg with SubExpAlg {
      def sub(e1: PPrint, e2: PPrint) = new PPrint() {
        def print() = e1.print() + "-" + e2.print()
      }
    }

object OA extends App {

class Test extends EvalSubExpAlg
class Test2 extends PrintSubExpAlg

val evaluate = new Test
val print = new Test2
val l1 = evaluate.lit(5)
val l2 = evaluate.lit(4)
val add1 = evaluate.add(l1, l2).eval()
val print1 = print.add(print.lit(5), print.lit(4)).print()
}


The only thing I was really asking was whether I could use a single Test class and navigate between the methods of both types (through references to those types).

by Val at January 29, 2015 06:20 PM

Scala type inference working with Slick Table

Have such models (simplified):

case class User(id:Int,name:String)
case class Address(id:Int,name:String)

Slick (2.1.0 version) table mapping:

class Users(_tableTag: Tag) extends Table[User](_tableTag, "users") with WithId[Users, User] {
  val id: Column[Int] = column[Int]("id", O.AutoInc, O.PrimaryKey)
}

trait WithId[T, R] {
  this: Table[R] =>
  def id: Column[Int]
}

Mixing trait WithId I want to implement generic DAO methods for different tables with column id: Column[Int] (I want method findById to work with both User and Address table mappings)

trait GenericSlickDAO[T <: WithId[T, R], R] {
  def db: Database

  def findById(id: Int)(implicit stk: SlickTableQuery[T]): Option[R] = db.withSession { implicit session =>
    stk.tableQuery.filter(_.id === id).list.headOption
  }
}

trait SlickTableQuery[T] {
  def tableQuery: TableQuery[T]
}

object SlickTableQuery {
  implicit val usersQ = new SlickTableQuery[Users] {
    val tableQuery: TableQuery[Users] = Users
  }
}

The problem is that findById doesn't compile:

Error:(13, 45) type mismatch; found : Option[T#TableElementType] required: Option[R] stk.tableQuery.filter(_.id === id).list.headOption

As I see it T is of type WithId[T, R] and at the same time is of type Table[R]. Slick implements the Table type such that if X=Table[Y] then X#TableElementType=Y.

So in my case T#TableElementType=R and Option[T#TableElementType] should be inferred as Option[R] but it isn't. Where am I wrong?

by ka4eli at January 29, 2015 06:16 PM

How do I find the min() or max() of two Option[Int]s in Scala?

How would you find minValue below? I have my own solution but want to see how others would do it.

val i1: Option[Int] = ...
val i2: Option[Int] = ...
val defaultValue: Int = ...
val minValue = ?
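One possible approach (a sketch, not necessarily the asker's own solution): treat each Option as a zero-or-one-element collection, reduce with min, and fall back to the default only when both are None:

```scala
// Concatenating the Options yields 0, 1 or 2 values; reduceOption avoids
// an exception on the empty list, and getOrElse supplies the default.
val minValue: Int =
  (i1.toList ++ i2.toList).reduceOption(_ min _).getOrElse(defaultValue)
```

With this, Some(3) and Some(5) give 3, Some(3) and None give 3, and two Nones give defaultValue.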

by Graham Lea at January 29, 2015 06:12 PM

Unit testing a side effecting actor with Akka

Given an actor with the following receive method definition:

def receive = {
  case SendEmail ⇒ {
    val body = prepareMessage(config.getString("source"), config.getString("template"))
    mailer.send(body, System.getenv("mailuser"), List(""), None, "stuff")
    listener.foreach(_ ! SendEmail)
    log.info("hey") //This line does get executed.
    self ! PoisonPill
  }
}

and a test class (written with ScalaCheck) with the definition:

class EmailActorSpec extends ActorSpec {
  behavior of "An EmailActor"
  it should "send an email" in {
    val mailMock = new TestMailAccessor
    val props = Props(new EmailActor(Some(testActor), mailMock))
    val emailer = system.actorOf(props, "EmailActor")
    emailer ! SendEmail
    val result = expectMsgType[SendEmail](100 millis)
  }
}

Note that ActorSpec mixes in the following trait:

trait StopSystemAfterAll extends BeforeAndAfterAll {
    this: TestKit with Suite ⇒
    override protected  def afterAll() {

It also mixes in the ImplicitSender trait from Akka TestKit. I end up with the following log output:

[INFO] [01/26/2015 13:49:27.287] [] [akka://testsystem/user/EmailActor] Message [$] from Actor[akka://testsystem/user/EmailActor#1885968450] to Actor[akka://testsystem/user/EmailActor#1885968450] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

How do I set up the tests so that when testing for message reception they actually pass?

by Kevin at January 29, 2015 06:10 PM



Big Double in Scala templates and Play Framework

I'm using Play Framework 2.3.6 and Scala

When I try to display an input with a big Double, i.e. 55 000 000, the input displays 5.5E7:

@inputText(field("price"), '_label -> "Price")

<input type="text" id="price" name="price" value="5.5E7">

How can I change default formatting or somehow display it properly?

by JMichal at January 29, 2015 05:55 PM


What does semi-decidability mean?

I've come across the term "semi-decidability" in my logic course but I don't understand it. I have tried to find a clear explanation, but without success. Can someone explain it please?

by Michiel at January 29, 2015 05:53 PM




Thinking in Scala

Hey everyone,

I have been trying to learn Scala for the past couple of weeks (coming from a Python background) and have realized that I don't exactly understand the structure a Scala program is supposed to have.

As an exercise, I am redoing assignments from a bioinformatics course I took a year ago (that was in Python) and I cannot even get past the first basic problem, which is: parse a large text file and have a generator function return (header, sequence) tuples. I wrote a non-rigorous solution in Python in a couple minutes:

I know that you can parse a file with Source.fromFile(...).getLines(), but I can't figure out how I'm supposed to solve the problem in Scala. I just can't wrap my head around what a "functional" solution to this problem looks like.
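For comparison, here is one functional sketch of the grouping (my own, not the asker's; it assumes FASTA-style input where header lines start with '>' and the file begins with a header):

```scala
// Fold the lines into (header, sequence) pairs without mutable state:
// a header line starts a new pair, any other line extends the current one.
val pairs: List[(String, String)] =
  scala.io.Source.fromFile("reads.fa").getLines()
    .foldLeft(List.empty[(String, String)]) {
      case (acc, line) if line.startsWith(">") => (line.drop(1), "") :: acc
      case ((h, s) :: rest, line)              => (h, s + line) :: rest
      case (Nil, _)                            => Nil // junk before first header
    }
    .reverse
```

(For lazy, generator-like behavior one would use an Iterator instead of folding the whole file.)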

Thanks and apologies if this isn't an appropriate question for this sub.

EDIT: Wow, amazing feedback from this community. Thank you all so much!

submitted by Demonithese

January 29, 2015 05:46 PM



Extracting Raw JSON as String inside a Spray POST route

I have a POST Spray route, and the request contains a JSON body (content-type "application/json"). I want a way to extract the raw JSON from this request inside my route.

For http://host:port/somepath/value1 I want to extract the post body as a TextMsgResponse. But for http://host:port/somepath/value2 I want to extract the post body just as raw JSON (e.g., { "name":"Jack", "age":30 }).

val myRoute = path("somepath" / Segment) { pathSegment =>
  post { //use only POST requests
    pathSegment match {
      case "value1" =>
        entity(as[TextMsgResponse]) { textMsg =>
          complete {
            //do something with the request
          }
        }
      case "value2" => {
        //here is where I want to extract the RAW JSON from the request
      }
    }
  }
}
by Soumya Simanta at January 29, 2015 05:25 PM

soft timeout crashes main instance of Scala program using Scala^Z3

I'm using Scala^Z3 to use Z3 from Scala. For some experiments I'm doing, which involve solving problems that become very complex, I need a way to cancel the current calculation.

I have tried a soft timeout, which from the documentation sounded like the perfect option for me. I used it like this:

config.setParamValue("SOFT_TIMEOUT", "5200")

However, instead of just canceling the calculation, it crashes my whole Scala program with the error message "Error: invalid usage".

I've tried using concurrency (e.g. Futures) to keep the main program from dying, but then I can't use Z3 in my program anymore until I restart it, because I immediately get the error message "Error: invalid usage".

Is there something I misunderstood about soft timeouts?

Thanks in advance!

Yours, Stefan Tiran

by Stefan Tiran at January 29, 2015 05:24 PM


CS Team Project

Right so, doing a CS team project.

Can anyone give me a list of files I should prepare to protect myself in the case of dead-weight team members?

Thanks in advance.

submitted by HandsomeJesus

January 29, 2015 05:23 PM

Fred Wilson

Another Tweetstorm Rant

I got fed up yesterday with seeing this on my phone all the time

facebook notification

That red notification next to the Facebook app is basically permanent because it is about messages that I need to download FB messenger to receive and clear the notification. I love notifications, they are the primary way I navigate my phone, and I am just a little bit OCD about clearing them. But I don’t use FB messenger. I use iMessage with my family, Kik with USV folks and a few others, and SMS via iMessage for the rest. So I’ve avoided downloading FB Messenger because I don’t need yet another messenger on my phone.

Yesterday afternoon I ran out of patience after seeing a new notification, clicking on it, only to find it was yet again a prompt to download FB Messenger. I decided to rant a little bit on Twitter and fired off the following tweetstorm:

tweetstorm tweets #1, #2 and #3

I am a big fan of the "constellation" approach to mobile apps. Google does it well. Dropbox does it well. Facebook does it well. I think it's a trend that will continue, because less is more in the mobile app user experience, and app developers and the mobile operating systems are making it easier to seamlessly move from app to app, like what already happens on the web.

But there is a bridge too far and I think using mobile notifications to force someone to download an app they really don’t want, just to clear the damn notification, is exactly that.

I’m hoping users and developers reject this approach. I’m afraid they won’t because it has worked so well for Facebook.

by Fred Wilson at January 29, 2015 05:16 PM


Defining a function in Environ (Clojure) and then using it in code

I would like to be able to define an anonymous function in my Leiningen project using Environ.

Here is what that part of the project file looks like:

{:env {:foo (fn [s]
                (count s))}}

Then in my code, I would like to use that function. Something like:

(-> "here are a few words"
    (env :foo))

And then to get the size of s.

by Neoasimov at January 29, 2015 05:11 PM



Is there a way to include type hints inside the clojure threading macro?

For example, as in the example here,

=> (-> "a b c " .toUpperCase (.replace "A" "X") (.split " ") first)
=> "X"

I'd like to be able to do something like

 => (-> ^String "a b c " .... etc etc 

to avoid the reflection penalties, esp. in interfacing with java code.

by Steve B. at January 29, 2015 05:08 PM


Constructing inequivalent binary matrices

I am trying to construct all inequivalent $8\times 8$ matrices (or $n\times n$ if you wish) with elements 0 or 1. The operation that gives equivalent matrices is the simultaneous exchange of the $i$-th and $j$-th rows AND the $i$-th and $j$-th columns, e.g. for $1\leftrightarrow2$: \begin{equation} \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{array} \right) \sim \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right) \end{equation}

Eventually I will also need to count how many equivalent matrices there are within each class, but I think Pólya's counting theorem can do that. For now I just need an algorithmic way of constructing one matrix in each equivalence class. Any ideas?
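One way to make the equivalence concrete: since the swaps generate all simultaneous row/column permutations, a brute-force canonical form (lexicographically smallest relabeling; the function name is mine) identifies equivalent matrices. This is only feasible for small $n$ - for $n=8$ there are $2^{64}$ matrices, so enumerating one representative per class needs something cleverer, such as orderly generation:

```python
from itertools import permutations

def canonical(m):
    """Lexicographically smallest version of the 0/1 matrix m under
    simultaneous row/column permutation (brute force over all n! relabelings)."""
    n = len(m)
    return min(
        tuple(m[p[i]][p[j]] for i in range(n) for j in range(n))
        for p in permutations(range(n))
    )

# The two matrices from the example above land in the same class:
A = [[0, 0, 0], [0, 1, 1], [1, 0, 0]]
B = [[1, 0, 1], [0, 0, 0], [0, 1, 0]]
print(canonical(A) == canonical(B))  # → True
```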

by Heterotic at January 29, 2015 05:08 PM

books in the field of fractal and multifractal analysis?

I am interested in learning the field of fractal and multifractal analysis, but from the point of view of computer science. The books that I managed to get, apart from the classic Strogatz book, were mathematically too heavy, with little explanation of applications from a computational view. Which books would you recommend for starting in this area?


by Layla at January 29, 2015 05:05 PM


Haha, the new Greek finance minister rocks! He ...

Haha, the new Greek finance minister rocks! He coined the term "fiscal waterboarding" for the way Greece has been dealt with so far. What a magnificent term! In one stroke it is clear what he wants to say, the Schäuble policy is aptly described, and the other side is on the argumentative defensive.

January 29, 2015 05:01 PM


Looking for the log file formatting of Mariadb

My team and I are working on an infrastructure of multiple servers involving MariaDB and, as of recently, Fluentd. For logging purposes I need to parse the MariaDB log files using regexes so they can be read and catalogued by Fluentd. I'm hoping to find an example of the default slow-query.log and general.log from a MariaDB server; it might be a silly request, but I haven't been able to find a local example, nor has Google turned up anything useful.

by Aaron Pederson at January 29, 2015 04:55 PM


Dinitz with whole number capacities

I am learning Dinitz's algorithm and I have found the following problem in its applications:

Prove that Dinitz's algorithm on a network with whole-number capacities of at most O(1) will run in time O(nm), where n is the number of vertices and m is the number of edges.

I have no idea how to even start this proof. Could you please give me some hints?

by jentakproradost5 at January 29, 2015 04:54 PM


Determining if $G$ contains $K_4$ as a minor in polynomial time

I am trying to devise an algorithm for determining if an undirected graph $G$ contains $K_4$ as a minor. In a previous problem I was able to test for $K_{2,3}$ by looking at all pairs of vertices and applying Menger's theorem as a black box on each pair (i.e. checking for at least 3 vertex-disjoint paths). This gave a naive $O(n^3)$ algorithm. However, the same trick doesn't seem to work for $K_4$, since the same type of structure cannot be exploited. How can one test for $K_4$?

I am also wondering what the best possible algorithms for $K_{2,3}$ and $K_4$ testing might be. I seriously doubt that the brute-force approach highlighted above is optimal. Is there a known lower bound?

by Lester X at January 29, 2015 04:50 PM


Packages 101?

Hi, I am getting the hang of Emacs and, after covering the basics, I am moving on to packages. The book I am learning from is based on version 23 and is outdated on customization and packages.

I've read around and it's only getting more complicated. Last time I messed with my .emacs file I had to start all over because I didn't know what I was doing.

I have no programming knowledge. I am using emacs to write screenplays (fountain-mode) and prose. I am already using org-mode and liking it.

Questions :

  1. What is a package and how does it work? What is happening when you install a package?

  2. I understand that the .emacs file stores all your customization. Do packages have any impact on the .emacs file?

  3. How do you manage (delete, edit, move) packages?

  4. Using ELPA, MELPA, Marmalade. Which one am I supposed to use? All?

  5. Anything else I missed that would be useful?

submitted by curious-scribbler
[link] [10 comments]

January 29, 2015 04:40 PM


Zend Server 8 – New Monitoring and Performance Tools

Late last week I met with Andi Gutmans and Michel Gerin of Zend Technologies. Because Andi is a self-described “coffee snob,” we headed directly to the nearby Dilettante Cafe for an in-depth chat. It was interesting to hear how they had grown from an organization focused on the PHP language to one that was taking on the more broad mission of scalability, monitoring, and visibility in to the run-time state of web and mobile applications that happened to be built using PHP. In fact, the only mention of PHP came when I remarked that we had spent no time discussing it. Andi spent more time telling me about his quest for the perfect microfoam than he did about language features!

Zend Server Update
We discussed their work on Zend Server, including the freshly released Version 8. As I described in a post that I wrote last year, Zend Server (and the crucial Z-Ray technology) gives developers access to in-content feedback on the behavior of the application that they are building, testing, or running.  Z-Ray provides developers with information about page requests, execution time, peak memory usage, events, PHP errors & warnings, SQL query execution, and variables.

I learned that Zend Server also has a number of other features that help applications run quickly and efficiently. These were not the subject of our chat and I didn’t take good notes, but we talked about code & data caching, job queues, and job scheduling. We also discussed cluster management and some new AWS integration. Zend Server can now be launched via AWS CloudFormation. It even includes a CloudFormation template generator to make this process simpler and totally repeatable:

Zend Server knows how to deploy code from Amazon Simple Storage Service (S3) (it can also pull from Git or deploy Zend Packages, also known as ZPKs).

Z-Ray Demo
Andi was eager to demo the newest Z-Ray features for me. He fired up his laptop (a stylish MacBook Air), got the Wi-Fi password from the barista, and connected to his demo instance.  He explained to me that they created an extensibility model for Z-Ray and used it to create a series of extensions for popular applications and frameworks. Each extension has intimate knowledge of the programming model, data structures, and database queries built and referenced by the associated environment.  This intimacy allows each extension to display the most important elements of each environment in a manner that will be familiar and comfortable to developers who are already versed in the environment.

Out of the box (to use that tired term left over from the by-gone era of shrink-wrap software), Z-Ray includes extensions for the Magento, Drupal, and WordPress application platforms. It also includes extensions for the Zend Framework, Symfony, and Laravel application frameworks. These extensions are available from the Official Zend Server Extensions repo on GitHub. Here’s an example of Z-Ray in action. It is aware that it is accessing the database queries initiated by a WordPress application and the display is customized accordingly:

The extension API is open and documented. Third-party (non-Zend) developers have already created extensions for other environments including Doctrine 2 (read more about the extension).

Zend Server on AWS
The Developer and Professional editions of Zend Server are available on the AWS Marketplace and you can launch a free trial of either one with a couple of clicks:


Both editions include a bunch of features that are intended to make Zend Server mesh smoothly with existing AWS environments and applications. Here are some of the features:

  • A new, JSON-based format for the EC2 user data that is passed to each newly launched instance. This data is used to configure the Zend Server.
  • A Z-Ray extension for the AWS API.
  • Custom script actions on startup.
  • Control over the dissemination of AWS access and secret keys to instances.
  • Control over cluster membership.

To learn more about these features, watch Zend’s new video, Getting Started with Zend Server and Z-Ray on AWS.

We wrapped up our meeting, recycled our mugs (this is Seattle, after all), and they headed back to Sea-Tac airport for their flight back to Silicon Valley!


PS – I love to learn and write about cool uses of AWS. Please track me down (a search for ‘contact jeff barr’ is a good start) and let me know when you are coming to Seattle!

by Jeff Barr at January 29, 2015 04:35 PM


Flattening nested entities with ids in Clojure

Lets say I define an entity (with nested entity) in the form:

{:id 1
 :a 7
 :b "Bob"
 :c {:id 2
     :d 9}}

I would like to convert this to a vector of vectors of the form [[id key value]] e.g.

  [1 :a 7]
  [1 :b "Bob"]
  [2 :d 9]
  [1 :c 2]

I think it's going to have to be some sort of recursive algorithm that branches depending on the type of the value, but I can't quite get it to work.

Has anyone done something like this before?

Any advice would be greatly appreciated,
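The question is about Clojure, but the branching recursion can be sketched in Scala with a hypothetical Map[String, Any] model: nested maps are recursed into, and a link triple from the parent's id to the child's id is emitted. (Names and the Any-typed model are illustrative, not the asker's code.)

```scala
// Flatten {id, k -> v, k2 -> {id2, ...}} into (id, key, value) triples,
// recursing into nested maps and linking parent id to child id.
def flattenEntity(m: Map[String, Any]): Vector[(Any, String, Any)] = {
  val id = m("id")
  m.toVector.filter(_._1 != "id").flatMap {
    case (k, v: Map[_, _]) =>
      val child = v.asInstanceOf[Map[String, Any]]
      flattenEntity(child) :+ ((id, k, child("id")))
    case (k, v) =>
      Vector((id, k, v))
  }
}

val entity = Map("id" -> 1, "a" -> 7, "b" -> "Bob", "c" -> Map("id" -> 2, "d" -> 9))
```

For the example entity this yields the four triples (1, "a", 7), (1, "b", "Bob"), (2, "d", 9) and (1, "c", 2), matching the desired output up to ordering.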


by Sigmoidal at January 29, 2015 04:34 PM

Spark Streaming: StreamingContext doesn't read data files

I'm new to Spark Streaming and I'm trying to get started with it using the Spark shell. Assume I have a directory called "dataTest" placed in the root directory of spark-1.2.0-bin-hadoop2.4.

The simple code that I want to test in the shell (after typing $ ./bin/spark-shell) is:

import org.apache.spark.streaming._
val ssc = new StreamingContext(sc, Seconds(2))
val data = ssc.textFileStream("dataTest")
println("Nb lines is equal to= "+data.count())
data.foreachRDD { (rdd, time) => println(rdd.count()) }

And then I copy some files into the directory "dataTest" (I also tried renaming some existing files in this directory).

But unfortunately I did not get what I want (i.e., I didn't get any output, so it seems like ssc.textFileStream doesn't work correctly), just some things like:

15/01/15 19:32:46 INFO JobScheduler: Added jobs for time 1421346766000 ms
15/01/15 19:32:46 INFO JobScheduler: Starting job streaming job 1421346766000 ms
.0 from job set of time 1421346766000 ms
15/01/15 19:32:46 INFO SparkContext: Starting job: foreachRDD at <console>:20
15/01/15 19:32:46 INFO DAGScheduler: Job 69 finished: foreachRDD at <console>:20
, took 0,000021 s
15/01/15 19:32:46 INFO JobScheduler: Finished job streaming job 1421346766000 ms
.0 from job set of time 1421346766000 ms
15/01/15 19:32:46 INFO MappedRDD: Removing RDD 137 from persistence list
15/01/15 19:32:46 INFO JobScheduler: Total delay: 0,005 s for time 1421346766000
ms (execution: 0,002 s)
15/01/15 19:32:46 INFO BlockManager: Removing RDD 137
15/01/15 19:32:46 INFO UnionRDD: Removing RDD 78 from persistence list
15/01/15 19:32:46 INFO BlockManager: Removing RDD 78
15/01/15 19:32:46 INFO FileInputDStream: Cleared 1 old files that were older tha
n 1421346706000 ms: 1421346704000 ms
15/01/15 19:32:46 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()

Did I forget something? Thanks!

by Mohammed Gh at January 29, 2015 04:31 PM



problem related to backoff strategy [on hold]

I tried to solve this problem using the p-persistent approach but cannot find where to start. P-persistence is used in CSMA/CD to send packets over a shared transmission medium so that collisions between packets are minimized.

A and B are the only two stations on an Ethernet. Each has a steady queue of frames to send. Both A and B try to transmit a frame, they collide, and A wins the first backoff race. At the end of this successful transmission by A, both A and B try to transmit again and collide again. What is the probability that A again wins the backoff race?

by ugan jha at January 29, 2015 04:30 PM



Scalding: Add trait from separate file

I have several scalding jobs that contain a bunch of constants and a few functions that are consistent across all the jobs. When I need to make a change to one of those, I don't want to change it in 5 different places. I wanted to create a trait that would store those things but I am having trouble referencing/importing the trait into my job.

So I have a file called constants.scala that contains:

 trait constants {a bunch of stuff defined here}

In one of my jobs, called myJob.scala I try to define a class like this:

class TrxnAmts(args : Args) extends Job(args) with constants {
    // all my other code goes here
}

I try to run myJob in HDFS adding constants.scala to the classpath using the command:

scalding/scripts/scald.rb --hdfs --cp /path/to/constants.scala /path/to/myJob.scala

constants.scala appears in the classpath but nothing in the trait is recognized. How do I make this work? Do I need to compile constants.scala and reference the class or compile it into a jar first? Is there a better way to go about this?

Not very experienced with OOP so hopefully I'm not asking a really basic/obvious question.


by J Calbreath at January 29, 2015 04:28 PM


Fibonacci words

I came across the following problem in my old Czech algorithm textbook, sadly came with no hints or solution.

"We define Fibonacci words as $F_{0}=a$, $F_{1}=b$, $F_{n+2}=F_{n}F_{n+1}$, where $a$ and $b$ are general letters. How in a given string (over a potentially large alphabet) can you find the longest Fibonacci's sub-word in linear time?"

I know a solution in quadratic time, but can't reduce it to linear. Can anyone point me to the right direction?

by Fanda at January 29, 2015 04:23 PM



Extract specific JSON field with Scala and Argonaut

I'm trying to parse json with Scala and Argonaut.

Suppose I get a JSON response from some other REST service and I don't know the order or the number of the fields in that response. For example, it returns JSON with five fields, but I want to extract only one field and drop the others:

   "Accept-Language": "ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3",
   "Host": "",
   "Referer": "",
   "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.4.0",
   "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"

So I'm write some code:

object Service {

  case class Address(headers: String, rest: Option[String])

  implicit def AddressCodecJson: CodecJson[Address] =
    casecodec2(Address.apply, Address.unapply)("headers", "rest")

  def parse(data: String): Option[Address] =
    data.decodeOption[Address]

  def main(args: Array[String]): Unit = {
    val src = url("")
    val response: Future[String] = Http(src OK as.String)

    response onComplete {
      case Success(content) =>
        val userData: Option[Address] = parse(content)
        println(s"Extracted IP address = ${userData.get.headers}")
      case Failure(err) =>
        println(s"Error: ${err.getMessage}")
    }
  }
}
But of course this code doesn't work, probably because answers from jsontest doesn't compare with Address case class.

I get this error message:

java.util.NoSuchElementException: None.get
    at scala.None$.get(Option.scala:313)
    at scala.None$.get(Option.scala:311)
    at micro.api.Service$$anonfun$main$1.apply(Service.scala:26)
    at micro.api.Service$$anonfun$main$1.apply(Service.scala:23)
    at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(

How can I get only one field specified by its name?

by John Ostin at January 29, 2015 04:12 PM






What does it mean for a function $f\colon M → N$ between *any* sets $M, N$ to be computable?

In our lecture notes on lambda calculus, I encountered the sentence:

Let $M$ be a set and $f\colon ℕ → M$ be computable.

Does this even make sense? Don’t we need additional structure on $ℕ$ and $M$ to talk about computability and decidability? How can I make sense of this statement?

by k.stm at January 29, 2015 03:49 PM


Is it possible for two securities to have the same first 8 characters of a CUSIP but differ in the check digit?

CUSIP is a 9-character identifier whose last digit is a check digit computed from the previous 8 characters. It seems to me, then, that it is not possible for two securities to have the same first 8 characters and different check digits. I know this is not exactly quantitative, but I haven't found a forum where I could ask.
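Indeed, the 9th character is a deterministic function of the first 8 (the "modulus 10 double-add-double" rule), so two valid CUSIPs agreeing on the first 8 characters must also agree on the check digit. A sketch of the computation:

```scala
// CUSIP check digit: digits keep their value, letters map to 10..35,
// '*','@','#' to 36..38; every second value is doubled, the digits of
// each product are summed, and the total determines the check digit.
def cusipCheckDigit(base8: String): Int = {
  require(base8.length == 8)
  val sum = base8.toUpperCase.zipWithIndex.map { case (c, i) =>
    val v =
      if (c.isDigit) c.asDigit
      else if (c.isLetter) c - 'A' + 10
      else c match { case '*' => 36; case '@' => 37; case '#' => 38 }
    val w = if (i % 2 == 1) v * 2 else v
    w / 10 + w % 10
  }.sum
  (10 - sum % 10) % 10
}
```

For example, for the widely published CUSIP 037833100 (Apple Inc.), cusipCheckDigit("03783310") returns 0, the final character of the identifier.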


by PBD10017 at January 29, 2015 03:39 PM



Lower bounds for space with some probability of error

There is an information theoretic lower bound of $\log_2 {U \choose x}$ for the number of bits to represent a subset of $x$ elements chosen from a universe of size $U$. We can in principle use this representation (perhaps inefficiently) as a data structure to test if any query is part of this subset.

How can you show a similar information theoretic lower bound if we are happy to have false positives with some probability $p$?
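One standard counting sketch (for deterministic structures that never err on stored elements, and ignoring lower-order terms): each memory state of the structure accepts some set $S$ of queries, and a false-positive rate $p$ forces $|S| \le x + p(U - x)$, so a single state can serve at most $\binom{|S|}{x}$ of the $\binom{U}{x}$ possible inputs. The number of bits must therefore be at least \begin{equation} \log_2 \frac{\binom{U}{x}}{\binom{x + pU}{x}} \;\approx\; x \log_2 \frac{1}{p} \quad \text{for } x \ll pU, \end{equation} which is the classic lower bound that Bloom filters approach within a constant factor of $\log_2 e \approx 1.44$.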

by eleanora at January 29, 2015 03:26 PM


Why does Anorm throw a TypeDoesNotMatch exception when inserting a Primary Key as Text?

I have a table in Postgres 9.4 that has email address as the Primary Key. Using Anorm, I then carry out the following

 DB.withConnection { implicit connection =>
   SQL"insert into member_login_email(email, password) values ($email, $password)".executeInsert()
 }

When this is executed, the correct values are entered into the table, but a TypeDoesNotMatch runtime exception is thrown:

    at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.7.jar:2.3.7]
    at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.7.jar:2.3.7]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.7.jar:2.3.7]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.7.jar:2.3.7]
    at [scala-library-2.11.1.jar:na]
Caused by: java.lang.RuntimeException: TypeDoesNotMatch(Cannot convert class java.lang.String to Long for column ColumnName(,Some(email)))
    at scala.sys.package$.error(package.scala:27) ~[scala-library-2.11.1.jar:na]
    at anorm.Sql$.anorm$Sql$$as(Anorm.scala:472) ~[anorm_2.11-2.3.7.jar:2.3.7]
    at anorm.Sql$class.executeInsert(Anorm.scala:350) ~[anorm_2.11-2.3.7.jar:2.3.7]
    at anorm.SimpleSql.executeInsert(Anorm.scala:190) ~[anorm_2.11-2.3.7.jar:2.3.7]
    at repository.MemberLoginEmailRepository$$anonfun$create$1.apply(MemberLoginEmailRepository.scala:17) ~[classes/:na]

It seems that Anorm is expecting Primary Keys to be of type Long. Is there anyway of getting Anorm to accept a Primary Key of type Text without throwing an exception?

I've looked at the source code for Anorm but have been struggling to see where this is actually happening.

by Arthur at January 29, 2015 03:25 PM


DFA for every run of a's=2 or 3

I am trying to create a DFA for L = {w : every run of a's has length either two or three}.

This is my attempt at the solution... I feel like I am missing something.

[image: the attempted DFA]

by matt mowris at January 29, 2015 03:08 PM


how to connect to remote rabbitmq using clojure

I am using Langohr to connect to RabbitMQ. If I don't specify any connection string, it works well and connects to the local server, but I want to connect to a remote server, so I have the following code. I am using the connection string amqp://bigdata:bigdata@s1:5672, where s1 is the hostname of the remote server.

(let [a (println rabbitmq-url)
      rmq-conn (rmq/connect {:uri rabbitmq-url})
      a (println rabbitmq-url)]
  ...)

But it throws the following error:

$ lein run
Exception in thread "main", compiling:(/tmp/form-init589039011205967992.clj:1:71)
    at clojure.lang.Compiler.load(
    at clojure.lang.Compiler.loadFile(
    at clojure.main$load_script.invoke(main.clj:274)
    at clojure.main$init_opt.invoke(main.clj:279)
    at clojure.main$initialize.invoke(main.clj:307)
    at clojure.main$null_opt.invoke(main.clj:342)
    at clojure.main$main.doInvoke(main.clj:420)
    at clojure.lang.RestFn.invoke(
    at clojure.lang.Var.invoke(
    at clojure.lang.AFn.applyToHelper(
    at clojure.lang.Var.applyTo(
    at clojure.main.main(
Caused by:
    at com.rabbitmq.client.impl.AMQChannel.wrap(
    at com.rabbitmq.client.impl.AMQChannel.wrap(
    at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(
    at com.rabbitmq.client.impl.AMQConnection.start(
    at com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(
    at com.rabbitmq.client.impl.recovery.AutorecoveringConnection.init(
    at com.rabbitmq.client.ConnectionFactory.newConnection(
    at com.rabbitmq.client.ConnectionFactory.newConnection(
    at com.novemberain.langohr.Connection.init(
    at langohr.core$connect.invoke(core.clj:93)
    at clojurewerkz.testcom.core$create_message_from_database.invoke(core.clj:33)
    at clojurewerkz.testcom.core$create_message_from_database_loop.invoke(core.clj:53)
    at clojurewerkz.testcom.core$_main.doInvoke(core.clj:60)
    at clojure.lang.RestFn.invoke(
    at clojure.lang.Var.invoke(
    at user$eval5.invoke(form-init589039011205967992.clj:1)
    at clojure.lang.Compiler.eval(
    at clojure.lang.Compiler.eval(
    at clojure.lang.Compiler.load(
    ... 11 more
Caused by: com.rabbitmq.client.ShutdownSignalException: connection error
    at com.rabbitmq.utility.ValueOrException.getValue(
    at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(
    at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(
    at com.rabbitmq.client.impl.AMQChannel.privateRpc(
    at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(
    ... 27 more
Caused by: java.net.SocketException: Connection reset
    at com.rabbitmq.client.impl.Frame.readFrom(
    at com.rabbitmq.client.impl.SocketFrameHandler.readFrame(
    at com.rabbitmq.client.impl.AMQConnection$

by Daniel Wu at January 29, 2015 03:08 PM

How do I create a Scala sequence of expressions?

Another question led me to the need to create a sequence of Scala expressions. I seem to be unable to do that.

I have a SchemaRDD object z:

org.apache.spark.sql.SchemaRDD =
SchemaRDD[0] at RDD at SchemaRDD.scala:103
== Query Plan ==
== Physical Plan ==
ParquetTableScan [event_type#0,timestamp#1,id#2,domain#3,code#4], (ParquetRelation ...., Some(Configuration: core-default.xml, core-site.xml, yarn-default.xml, yarn-site.xml, mapred-default.xml, mapred-site.xml, hdfs-default.xml, hdfs-site.xml), org.apache.spark.sql.SQLContext@e7f91e, []), []

and I want to project it onto two columns; should be the answer:

res19: Seq[org.apache.spark.sql.catalyst.expressions.Expression] => org.apache.spark.sql.SchemaRDD = <function1>

However, I seem to be unable to generate a Seq[Expression], e.g.:'event_type,'code))
<console>:21: error: type mismatch;
 found   : Seq[Symbol]
 required: org.apache.spark.sql.catalyst.expressions.Expression

<console>:21: error: type mismatch;
 found   : Symbol
 required: org.apache.spark.sql.catalyst.expressions.Expression

I thought that a symbol was an expression...

so, how do I invoke select?

by sds at January 29, 2015 03:08 PM

Scala reflection, finding and instantiating all classes with a given annotation

I want use reflection to find, at runtime, all classes that have a given annotation, however I can't work out how to do so in Scala. I then want to get the value of the annotation and dynamically instantiate an instance of each annotated class mapped to the value of the associated annotation.

Here's what I want to do:

package problem
import scala.reflect.runtime._

object Program {

  case class Foo (key: String) extends scala.annotation.StaticAnnotation

  class Bar
  @Foo ("x")
  case class Bar0 () extends Bar
  @Foo ("y")
  case class Bar1 () extends Bar
  @Foo ("z")
  case class Bar2 () extends Bar

  def main (args : Array[String]): Unit = {

    // I want to use reflection to build
    // the following dynamically at run time:
    // val whatIWant: Map [String, Bar] =
    //   Map("x" -> Bar0 (), "y" -> Bar1 (), "z" -> Bar2 ())
    // (it's a map of attribute key -> an instance
    // of the type that has that attribute with that key)
    val whatIWant: Map [String, Bar] = ???
  }
}

And, in the hope of being able to explain myself better, here's how I would solve the problem in C#.

using System;
using System.Linq;
using System.Reflection;
using System.Collections.Generic;

namespace scalaproblem
{
    public class FooAttribute : Attribute
    {
        public FooAttribute (String s) { Id = s; }
        public String Id { get; private set; }
    }

    public abstract class Bar {}

    [Foo ("x")]
    public class Bar0: Bar {}

    [Foo ("y")]
    public class Bar1: Bar {}

    [Foo ("z")]
    public class Bar2: Bar {}

    public static class AttributeExtensions
    {
        public static TValue GetAttributeValue<TAttribute, TValue>(this Type type, Func<TAttribute, TValue> valueSelector)
            where TAttribute : Attribute
        {
            var att = type.GetCustomAttributes (typeof(TAttribute), true).FirstOrDefault() as TAttribute;
            if (att != null)
                return valueSelector(att);
            return default(TValue);
        }
    }

    public static class Program
    {
        public static void Main ()
        {
            var assembly = Assembly.GetExecutingAssembly ();
            Dictionary<String, Bar> whatIWant = assembly.GetTypes()
                .Where (t => Attribute.IsDefined (t, typeof(FooAttribute)))
                .ToDictionary (t => t.GetAttributeValue((FooAttribute f) => f.Id), t => Activator.CreateInstance (t) as Bar);

            whatIWant.Keys.ToList().ForEach (k => Console.WriteLine (k + " ~ " + whatIWant [k]));
        }
    }
}

by Pooky at January 29, 2015 03:04 PM




How to add a method to Enumeration in Scala?

In Java you could:

public enum Enum {
    ONE {
        public String method() {
            return "1";
        }
    },
    TWO {
        public String method() {
            return "2";
        }
    },
    THREE {
        public String method() {
            return "3";
        }
    };

    public abstract String method();
}

How do you do this in Scala?

Thanks in advance, Etam.
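One common idiom (a sketch; the names here are illustrative) sidesteps Enumeration entirely and uses a sealed trait with case objects, each providing its own implementation of the abstract method:

```scala
sealed trait Num { def method: String }
case object One   extends Num { def method = "1" }
case object Two   extends Num { def method = "2" }
case object Three extends Num { def method = "3" }

// Unlike Enumeration, pattern matches over a sealed trait are
// checked for exhaustiveness by the compiler.
val values: List[Num] = List(One, Two, Three)
```

Calling One.method yields "1", mirroring the per-constant method bodies of the Java enum above.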

by Etam at January 29, 2015 02:49 PM


How can I fill bookcases with shelves of books using the least number of bookcases?

Sorry for the layman's-terms question; my background in computer science is weak.

What I have is a list of shelves with books of varying height. Each shelf stores a value that describes how many shelves (of that height) can fit on a bookcase. The bookcases have adjustable shelves so that we can maximize the usage of each one.

What I have been trying so far is sorting the shelves from shortest to tallest and filling in each bookcase, but what happens is that there are remainders which could be filled in with the small shelves.

I should mention that an additional complexity is that we want to place the tallest shelf on the top of the bookcase (because it would be more efficient).

I am not expecting a full on answer for how to do this... but I would very much appreciate a book or article on the topic. I just need a little direction and I think I would be able to pick it up.

Thanks for your help!
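This is essentially the bin-packing problem (NP-hard in general), and a standard starting point is the first-fit decreasing heuristic. A sketch under a simplified model (each shelf consumes a fixed height of a fixed-capacity bookcase; the names are illustrative):

```scala
import scala.collection.mutable.ArrayBuffer

// First-fit decreasing: sort shelves tallest-first, put each shelf into
// the first bookcase that still has room, and open a new bookcase only
// when none fits. Sorting tallest-first also places the tallest shelf
// in each bookcase first (matching the "tallest on top" preference).
def firstFitDecreasing(shelfHeights: Seq[Int], capacity: Int): Seq[Seq[Int]] = {
  val cases = ArrayBuffer.empty[ArrayBuffer[Int]]
  for (h <- shelfHeights.sorted(Ordering[Int].reverse)) {
    cases.find(c => c.sum + h <= capacity) match {
      case Some(c) => c += h
      case None    => cases += ArrayBuffer(h)
    }
  }
  cases.map(_.toSeq).toSeq
}
```

For shelves of heights 5, 4, 3, 2, 1 and capacity 5 this packs three bookcases: [5], [4, 1], [3, 2]. The heuristic is not always optimal, but it is a well-studied baseline in the bin-packing literature.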

by Primalpat at January 29, 2015 02:40 PM




Functions are objects : Scala's tutorial cannot be complied

I am new to Scala. I followed the tutorial in

Here is the code :

object MyTimer {

  def oncePerSecond(callback: () => Unit) {
    while (true) { callback(); Thread sleep 1000 }
  }

  def timeFiles() {
    println("times flies like an arrow")
  }

  def main (args: Array[String]) {
    oncePerSecond(timeFiles)
  }
}

My IDE is IntelliJ IDEA, and Scala is 2.11.5 (java is JDK 8).

The IDE shows the error : Application does not take parameters.

[screenshot: the IntelliJ error]

by ChenZhongPu at January 29, 2015 02:28 PM


6-coloring of a tree in a distributed manner

I have some difficulty understanding the distributed algorithm for tree 6-coloring in $O(\log^*n)$ time.

The full description can be found in following paper: Parallel Symmetry-Breaking in Sparse Graphs. Goldberg, Plotkin, Shannon.

In short, the idea is ...

Starting from the valid coloring given by the processor ID's, the procedure iteratively reduces the number of bits in the color descriptions by recoloring each nonroot node $v$ with the color obtained by concatenating the index of a bit in which $C_v$ differs from $C_{parent}(v)$ and the value of this bit. The root $r$ concatenates $0$ and $C_r[0]$ to form its new color.

The algorithm terminates after $O(\log^*n)$ iterations.

I don't have an intuitive understanding of why it actually terminates in $O(\log^*n)$ iterations. As mentioned in the paper, on the final iteration the smallest index at which two bit strings differ is at most 3. So the 0th and 1st bits could be the same, and $2^2=4$, so these two bits give us 4 colors, plus another 2 colors for a differing 3rd bit; in total that is 8 colors, not 6 as in the paper. And why can we not proceed further with 2 bits? It is still possible to find differing bits and separate by them.

I would appreciate a somewhat deeper analysis of the algorithm than the one in the paper.

by fog at January 29, 2015 02:08 PM


Practical fault detection & alerting. You don’t need to be a data scientist

This post covers some common notions around better operational insights and alerting via fault and anomaly detection, debunks some of the myths that ask for over-complicated solutions, and provides some practical pointers that any programmer or sysadmin can implement without becoming a data scientist.

by Dieter on the web at January 29, 2015 02:08 PM


How to extract patterns of inputs? [on hold]

I need to extract patterns. For example:

<input type="text" name="ex1">
<input maxlength="10" name="ex2">

Extracted file:

ex1: type=text
ex2: maxlength=10

How can I do it? What methods can I use?

by Paulo Costa at January 29, 2015 02:07 PM



Scala sbt multi/module projects: how to deal with relative paths?

This could be a silly question, but I haven't found a solution to this problem. I have a multi-module project with sub-projects (sub1, sub2). Here's a figure:

  +-- sub1
        +-- src/main/resources/
  +-- sub2

I have a cfg file in sub1 and I load it from a class defined in sub1 using new File("./src/main/resources/"), and it works fine. If I move to sub2 and run the project, I get a 'file not found' when the class defined in sub1 loads the file, because the path is relative.

What is the best approach to solve this issue? I'd say that absolute paths could be a solution, but they are not portable to other locations in the file system or to other machines.

I suppose there are best practices to deal with this.
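One portable option, sketched here (the file name is illustrative): anything under src/main/resources is copied onto the runtime classpath, so it can be loaded by classpath path instead of by a path relative to the working directory.

```scala
object ResourceLoader {
  // Resolves relative to the classpath, not the working directory,
  // so it behaves the same whichever sub-project you run from.
  def loadText(path: String): Option[String] =
    Option(getClass.getResourceAsStream(path))
      .map(in => try scala.io.Source.fromInputStream(in).mkString finally in.close())
}

// e.g. ResourceLoader.loadText("/")  // hypothetical resource name
```

A missing resource simply yields None instead of a 'file not found' exception, which also makes the failure mode easier to handle.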

by Max at January 29, 2015 01:47 PM


Practical applications of parity games

Are there examples of practical applications of parity games, i.e. systems in the real world that can be represented as parity games?

Documentation related to parity games almost never includes a practical example of such an application.

by Antony at January 29, 2015 01:46 PM


Unable to install sbt on NFS

I'm installing sbt on my cluster, on which I've mounted an NFS file system. I followed

to install sbt manually. Then I ran sbt sbt-version to check, and the errors came as follows:

java.io.IOException: Function not implemented
at Method)
at java.nio.channels.FileChannel.tryLock(
at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:86)
at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78)
at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97)
at xsbt.boot.Using$.withResource(Using.scala:10)
at xsbt.boot.Using$.apply(Using.scala:9)
at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
at xsbt.boot.Locks$.apply0(Locks.scala:31)
at xsbt.boot.Locks$.apply(Locks.scala:28)
at xsbt.boot.Launch.locked(Launch.scala:238)
at xsbt.boot.Launch$.run(Launch.scala:102)
at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:35)
at xsbt.boot.Launch$.launch(Launch.scala:117)
at xsbt.boot.Launch$.apply(Launch.scala:18)
at xsbt.boot.Boot$.runImpl(Boot.scala:41)
at xsbt.boot.Boot$.main(Boot.scala:17)
at xsbt.boot.Boot.main(Boot.scala)
Error during sbt execution: Function not implemented

I'm using jdk7.

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)

So how can I install sbt on NFS?


by Max yao at January 29, 2015 01:43 PM

Planet Emacsen

Irreal: Rewriting and Squashing Git Commits with Magit

I use Magit all the time and really like it but I don't know how to do much more than stage and commit changes. Sometimes I can even resolve a merge conflict but I always have to stumble through it. As a result, I'm always on the lookout for Magit tutorials that help me get better at using it.

My latest find is a post by Shingo Fukuyama on using Magit to rewrite git commit messages. Fukuyama has lots of screen shots to show you what you'll see as you follow the steps he lays out.

In the same vein, Howard Abrams has a similar tutorial on using Magit to squash commits together. The process is very much like the one that Fukuyama describes for rewriting commit messages. I really like articles like these; they help me extend my Magit knowledge in a relatively painless way.

by jcs at January 29, 2015 01:30 PM



How to choose a framework?

Until today I have always chosen frameworks that I knew best (OK, that is one way to choose), but are there other things that should be examined when starting a project?

  • For example: I have always chosen jQuery for my front-end projects, but was it the best choice?

  • Should I have chosen AngularJS or something else?

In any case, I cannot evaluate which tools I should use in certain situations.

by user3854612 at January 29, 2015 01:23 PM


significance of ~ symbol in matlab function [duplicate]

This question already has an answer here:

If I have a function a that accepts 2 parameters (double) in Matlab as follows

function [x,y] = a(z)

What does the symbol "~" do when the function is called with this handle as follows

[x,~,y] = a(10)


by Mechanic at January 29, 2015 01:19 PM

Split list into multiple lists with fixed number of elements in java 8

I want to do something similar to Scala's grouped function: basically, pick 2 elements at a time and process them. Here is a reference for the same:

Split list into multiple lists with fixed number of elements

The streams API does provide collectors like groupingBy and partitioningBy, but none of them seem to do the same as the grouped function in Scala. Any pointers would be appreciated.
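For reference, this is the Scala behavior the question wants to replicate: grouped splits a collection into fixed-size chunks, with a possibly shorter final chunk.

```scala
// Scala's built-in grouped: fixed-size chunks; the last chunk may be shorter
val pairs = List(1, 2, 3, 4, 5).grouped(2).toList
// pairs == List(List(1, 2), List(3, 4), List(5))
```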

by vamosrafa at January 29, 2015 01:19 PM

"line number not found" error in confg/routes file in play2.3

my routes file in play gives this error

**error: cannot find symbol

In /home/smat/practical/Stocks/conf/routes (line number not found)**

here is my routes file

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET     /                           controllers.Application.index

# Map static resources from the /public folder to the /assets URL path
GET     /assets/*file               controllers.Assets.at(path="/public", file)

# stocks
GET     /stocks/add                 controllers.Stocks.add
POST    /stocks/save      
GET     /register                   controllers.Stocks.registerUser
POST    /register                   controllers.Stocks.registerUser

please help

by M.Ahsen Taqi at January 29, 2015 01:12 PM

Cannot specify return type - "

I faced this problem multiple times now:

//Returns ActorRef, compiles without problems
val helloActor = system.actorOf(Props[HelloActor], name = "helloactor")

    //Returns ActorRef, but this cannot be specified directly after the value.
    //"Not found: Type ActorRef"
    val helloActor: ActorRef = system.actorOf(Props[HelloActor], name = "helloactor")

Why can't I write val helloActor: ActorRef = ... if the return type is defined by the method actorOf? Of course, this isn't really important, but it would make the code more readable IMHO.

by TrudleR at January 29, 2015 01:08 PM


Efficient algorithms for vertical visibility problem

During thinking on one problem, I realised that I need to create an efficient algorithm solving the following task:

The problem: we are given a two-dimensional square box of side $n$ whose sides are parallel to the axes. We can look into it through the top. However, there are also $m$ horizontal segments. Each segment has an integer $y$-coordinate ($0 \le y \le n$) and $x$-coordinates ($0 \le x_1 < x_2 \le n$) and connects points $(x_1,y)$ and $(x_2,y)$ (look at the picture below).

We would like to know, for each unit segment on the top of the box, how deep can we look vertically inside the box if we look through this segment.

Formally, for $x \in \{0,\dots,n-1\}$, we would like to find $\max_{i:\ [x,x+1]\subseteq[x_{1,i},x_{2,i}]} y_i$.

Example: given $n=9$ and $m=7$ segments located as in the picture below, the result is $(5, 5, 5, 3, 8, 3, 7, 8, 7)$. Look at how deep light can go into the box.

Seven segments; the shaded part indicates the region which can be reached by light

Fortunately for us, both $n$ and $m$ are quite small and we can do the computations off-line.

The easiest algorithm solving this problem is brute force: for each segment, traverse the whole array and update it where necessary. However, this gives us a not very impressive $O(mn)$.

A great improvement is to use a segment tree which is able to maximize values on the segment during the query and to read the final values. I won't describe it further, but we see that the time complexity is $O((m+n) \log n)$.

However, I came up with a faster algorithm:


  1. Sort the segments in decreasing order of $y$-coordinate (linear time using a variation of counting sort). Now note that if any $x$-unit segment has been covered by any segment before, no following segment can bound the light beam going through this $x$-unit segment anymore. Then we will do a line sweep from the top to the bottom of the box.

  2. Now let's introduce some definitions: an $x$-unit segment is an imaginary horizontal segment on the sweep line whose $x$-coordinates are integers and whose length is 1. During the sweep, each such segment is either unmarked (that is, a light beam going from the top of the box can still reach it) or marked (the opposite case). The $x$-unit segment with $x_1=n$, $x_2=n+1$ is considered always unmarked. Let's also introduce sets $S_0=\{0\}, S_1=\{1\}, \dots, S_n=\{n\}$. Each set will contain a whole sequence of consecutive marked $x$-unit segments (if there are any) together with the following unmarked segment.

  3. We need a data structure that is able to operate on these segments and sets efficiently. We will use a find-union structure extended by a field holding the maximum $x$-unit segment index (index of the unmarked segment).

  4. Now we can handle the segments efficiently. Let's say we're now considering the $i$-th segment in order (call it the "query"), which begins at $x_1$ and ends at $x_2$. We need to find all the unmarked $x$-unit segments contained inside the $i$-th segment (these are exactly the segments on which the light beam ends its way). We do the following: first, find the first unmarked segment inside the query (Find the representative of the set containing $x_1$ and read off its max index, which is the unmarked segment by definition). If this index $x$ is inside the query, add it to the result (the result for this segment is $y$) and mark it (Union the sets containing $x$ and $x+1$). Repeat this procedure until all unmarked segments have been found, that is, until the next Find query gives an index $x \ge x_2$.

Note that each find-union operation will be done in only two cases: either we begin considering a segment (which can happen $m$ times) or we've just marked a $x$-unit segment (this can happen $n$ times). Thus overall complexity is $O((n+m)\alpha(n))$ ($\alpha$ is an inverse Ackermann function). If something is not clear, I can elaborate more on this. Maybe I'll be able to add some pictures if I have some time.
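The steps above can be sketched in code as follows. This is a minimal illustration, not the author's implementation: segments are assumed to be encoded as (x1, x2, y) triples, the find uses simple recursion with path compression, and a unit segment covered by no segment is reported as depth 0 (the floor of the box).

```scala
// Union-find sweep for the vertical visibility problem described above.
def lightDepths(n: Int, segs: Seq[(Int, Int, Int)]): Array[Int] = {
  val ans = Array.fill(n)(0)
  // find-union over x-unit segment indices 0..n; find(i) yields the first
  // unmarked index >= i (index n is the always-unmarked sentinel)
  val parent = Array.tabulate(n + 1)(identity)
  def find(i: Int): Int =
    if (parent(i) == i) i
    else { parent(i) = find(parent(i)); parent(i) }   // path compression
  for ((x1, x2, y) <- segs.sortBy(-_._3)) {           // sweep top-down
    var x = find(x1)
    while (x < x2) {        // each unmarked unit segment inside the query
      ans(x) = y            // light through [x, x+1] stops at height y
      parent(x) = x + 1     // mark it: union with the set to its right
      x = find(x)
    }
  }
  ans
}
```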

Now I reached "the wall". I can't come up with a linear algorithm, though it seems there should be one. So, I have two questions:

  • Is there a linear-time algorithm (that is, $O(n+m)$) solving the horizontal segment visibility problem?
  • If not, what is the proof that the visibility problem is $\omega(n+m)$?

by mnbvmar at January 29, 2015 01:07 PM


Is there a way to implement constraints in Haskell's type classes?

Is there some way (any way) to implement constraints in type classes?

As an example of what I'm talking about, suppose I want to implement a Group as a type class. So a type would be a group if there are three functions:

class Group a where
    product :: a -> a -> a  
    inverse :: a -> a 
    identity :: a

But those are not any functions, but they must be related by some constraints. For example:

product a identity = a 
product a (inverse a) = identity
inverse identity = identity


Is there a way to enforce this kind of constraint in the definition of the class so that any instance would automatically inherit it? As an example, suppose I'd like to implement the C2 group, defined by:

 data C2 = E | C 

 instance Group C2 where
      identity = E 
      inverse C = C

These two definitions uniquely determine C2 (the constraints above determine all the remaining operations; in fact, C2 is the only possible group with two elements because of the constraints). Is there a way to make this work?
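The usual workaround is to state the laws as checkable predicates rather than compiler-enforced constraints. A sketch of the same idea in Scala (this digest's common language), with the laws written as a runtime check that a property-testing library would sample over:

```scala
// Scala analogue of the Group class from the question; the equations are
// ordinary runtime checks, not something the type system enforces.
trait Group[A] {
  def product(x: A, y: A): A
  def inverse(x: A): A
  def identity: A
  // the three constraints from the question, stated as a predicate
  def lawsHold(x: A): Boolean =
    product(x, identity) == x &&
    product(x, inverse(x)) == identity &&
    inverse(identity) == identity
}

sealed trait C2
case object E extends C2
case object C extends C2

val c2Group: Group[C2] = new Group[C2] {
  val identity: C2 = E
  def inverse(x: C2): C2 = x                     // each element is its own inverse
  def product(x: C2, y: C2): C2 = if (x == y) E else C
}
```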

by Rafael S. Calsaverini at January 29, 2015 01:03 PM


Bloomberg equity option volatility data

Using the Bloomberg open API, I am trying to program a C++ script that is able to download option volatility data from Bloomberg. I currently do not have access to Bloomberg, but in the coming week I will be able to access a Bloomberg terminal to try out my script.

In which field are the individual option volatilities saved? I know that the last price field would be PX_LAST or LAST_PRICE, but have no idea in which field / format the option volatility would be accessible.

by Olorun at January 29, 2015 01:01 PM



Weird delete key behaviour suddenly


Sorry for posting a help post here, but I'm at the end of my tether now. After a recent system update (I don't know exactly when this would have occurred, it's been quite some time since I updated) my delete key is no longer behaving the same.

Before now it would remove a single character in front of the caret. Now it (seemingly randomly) removes huge chunks of text and I have no idea why. I can't figure out any rhyme nor reason to the behaviour and can't seem to stop myself hitting delete to remove single characters.

My backspace key is working as it always did. I can only imagine this is some feature that totally makes sense and something has changed that means it's no longer working, but how can I force the delete key to behave again?

Thanks subreddit <3

submitted by FionaSarah

January 29, 2015 12:58 PM


Unable to use ScalaFormat with SublimeText 3

I'm using Sublime Text 3. When I attempt to use the ScalaFormat package, the option to Format is disabled:


I have tried repeated installations of the plugin, as can be seen from the screenshot. To install it I copied the ScalaFormat source into Sublime Text 3\Packages\User. It appears to have partly installed correctly, as the context menu is displayed, but why is the "Format" option disabled?

by blue-sky at January 29, 2015 12:57 PM

Planet Theory

Reusing Data from Privacy

Vitaly Feldman gave a talk at Georgia Tech earlier this week on his recent paper Preserving Statistical Validity in Adaptive Data Analysis with Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold and Aaron Roth. This work looks at the problem of reuse of the cross-validation data in statistical inference/machine learning using tools from differential privacy.

Many machine learning algorithms have a parameter that specifies the generality of the model, for example the number of clusters in a clustering algorithm. If the model is too simple it cannot capture the full complexity of what it is learning. If the model is too general it may overfit, fitting the vagaries of this particular data too closely.

One way to tune the parameters is by cross-validation, running the algorithm on fresh data to see how well it performs. However if you always cross-validate with the same data you may end up overfitting the cross-validation data.

Feldman's paper shows how to reuse the cross-validation data safely. They show how to get an exponential (in the dimension of the data) number of adaptive uses of the same data without significant degradation. Unfortunately their algorithm takes exponential time but sometimes time is much cheaper than data. They also have an efficient algorithm that allows a quadratic amount of reuse.

The intuition and proof ideas come from differential privacy where one wants to make it hard to infer individual information from multiple database queries. A standard approach is to add some noise in the responses and the same idea is used by the authors in this paper.

All of the above is pretty simplified and you should read the paper for details. This is one of my favorite kinds of paper where ideas developed for one domain (differential privacy) have surprising applications in a seemingly different one (cross-validation).

by Lance Fortnow at January 29, 2015 12:53 PM



validating optional fields with play framework

I've been searching for the past half an hour but haven't found any solution or a page that actually describes validation of optional fields using the Play framework. Here's my form:

var myForm = Form(mapping(
    "id" -> optional(longNumber),
    "field" -> text
      .verifying("field is required", value=> value.length > 0),
    "heading" -> optional(text)
      .verifying("heading should be less than 50 characters", value=> value.length < 51) // Need something like this validation

When the field is optional(text), .verifying asks for an Option[...].

The validation I want to perform is: if heading is present, check its length (max 50 characters); if there is no heading, do nothing.

I want to do the validation on the field declared in mapping(), not after declaring all the fields and then validating them together. If you can provide some links, that will also work. Thanks
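Independent of Play's form API, the check being asked for ("absent is fine, present must satisfy the predicate") is exactly what Option.forall expresses in plain Scala:

```scala
// passes when the heading is absent; checks the length only when present
def headingValid(heading: Option[String]): Boolean =
  heading.forall(_.length <= 50)
```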

by Sunil Kumar at January 29, 2015 12:30 PM



Which class of languages is accepted by PDA when we restrict the stack to logarithmic size?

Let $\mathrm{LOG}_{\mathrm{CF}}$ be the class of all languages recognized by a Pushdown-automaton that uses $\leq \log n$ cells of its stack for each input of length $n$.

Obviously, this class is a proper subset of the class of context-free languages. Which languages are in this class, and what (closure) properties does it have?

I have found this class in Harrison's Book:

I have searched a lot about iterated counter languages but I can't understand them well. I also don't know whether this problem is what I am looking for or not.

I think that if we have $L_1$ and $L_2$ in this class, then their union is also in this class, by adding two lambda-transitions.

And if, given a PDA $A$ with logarithmic stack height, we can construct an equivalent PDA $B$ with the extra property that it always clears all its stack symbols except the bottom-of-stack symbol after every acceptance, then this class will be closed under Kleene star.

I would be grateful if anyone could explain to me whether this class is closed under intersection and complement or not.

I am still looking for just one non-regular language that is in this class!

by Fatemeh Ahmadi at January 29, 2015 12:10 PM


How to pass -D parameter or environment variable to Spark job?

I want to change Typesafe config of a Spark job in dev/prod environment. It seems to me that the easiest way to accomplish this is to pass -Dconfig.resource=ENVNAME to the job. Then Typesafe config library will do the job for me.

Is there way to pass that option directly to the job? Or maybe there is better way to change job config at runtime?


  • Nothing happens when I add --conf "spark.executor.extraJavaOptions=-Dconfig.resource=dev" option to spark-submit command.
  • I got Error: Unrecognized option '-Dconfig.resource=dev'. when I pass -Dconfig.resource=dev to spark-submit command.

by kopiczko at January 29, 2015 12:09 PM

Scala punctuation (AKA symbols and operators)

I'm reading some Scala code, trying to learn the language and understand the code itself, and I keep coming across some unintelligible punctuation that does stuff. The problem is that it's pretty much impossible to search in any search engine for punctuation - it's all filtered out before the query gets processed.

This is compounded by the fact that I haven't found any single document that outlines all the insane shortcuts that Scala seems to have in an easy way.

Can you point me to, or better yet write, such a guide? Just a comment, document, HTML page or blog post with a list of punctuation and the thing it does. In particular, I'm confused about:


by 0__ at January 29, 2015 12:03 PM


The complexity of a multi-objective shortest path problem

I have the following shortest path problem.

Consider a directed graph with $n$ levels. Each level has $m$ nodes. Each node at level $i$ is connected to all nodes at level $i+1$. Let us also make a starting node that is connected to all nodes at level $1$ (the first level).

Each edge is labelled by a pair of non-negative integer weights. Each level $i$ has a single non-negative integer label which we call $L_i$.

The goal is to minimize the shortest path with respect to the first weight in each edge weight pair from the starting node to the last level while ensuring that the weight of the path from the starting node with respect to the second weight in each edge weight pair is no more than $L_i$ at each level $i$.

Has this problem been studied? Is it known to be NP-hard?

by eleanora at January 29, 2015 11:48 AM



Where to obtain Eurex level 2 historical order book data from?

What are some possible sources to obtain Eurex level 2 historical order book data from?

Unfortunately I have only been able to find 1 source - namely Eurex itself, which charges 2000 Euro/month for the last 3 months and 1500 Euro/month for older months.

That, however, gives you the entire exchange. I was hoping the cherry pick the instruments I needed, e.g. maybe only the 10y bund future and thus get to a more reasonable quote of a few hundred per month?

by Cookie at January 29, 2015 11:30 AM


How to run .clj file as a script using leningen?

This is the second question after Is there a standalone Clojure package within Leiningen?

For example, I have a file hello_world.clj, and I can run it using

java -cp clojure.jar clojure.main hello_world.clj.

Since lein already contains Clojure (I can run lein repl directly), is there a way to do the same thing with lein, like

lein script hello_world.clj?

by hanfeisun at January 29, 2015 11:23 AM

Clojure: how to tell if out is going to console or is being piped?

I'm writing a Clojure CLI and would like to know if there is a way to test whether the output (i.e. of println) is being written to a console or is being piped to another program.

This is similar to this question but for clojure.
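One JVM-level heuristic, usable from Clojure as well, is java.io.Console: per its documentation, System.console() returns null when the virtual machine is not attached to an interactive terminal (e.g. when I/O is redirected or piped). Note that it reflects the process's console as a whole, not stdout specifically. Shown here as a Scala one-liner:

```scala
// null console => not interactive (piped/redirected output).
// The Clojure equivalent is (some? (System/console)).
val interactive: Boolean = System.console() != null
```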

by chris.wood at January 29, 2015 11:16 AM

Spark Upgrade from 1.1.1 to 1.2.0 does not run in local mode but searches for hadoop winutils

I recently upgraded my Spark / Scala application from 1.1.1 to 1.2.0. I usually run the application in local mode, but after the upgrade it keeps trying to store the RDDs under Hadoop. I suppose at least one of my dependencies is wrong, but which one?

Help is appreciated, as always!



libraryDependencies ++= Seq(
    "com.typesafe" % "config" % "1.0.2",

    "" %% "scala-io-file" % "0.4.2",

    "com.typesafe" %% "scalalogging-slf4j" % "1.1.0",
    "org.slf4j" % "jcl-over-slf4j" % "1.7.5",
    "ch.qos.logback" % "logback-classic" % "1.0.13",

    "org.specs2" %% "specs2-core" % "2.3.7" % "test",

    "org.apache.spark" %% "spark-core" % "1.2.0",

    "org.apache.spark" %% "spark-mllib" % "1.2.0", 

    "com.github.fommil.netlib" % "all" % "1.1.2",

    "com.novocode" % "junit-interface" % "0.11" % "test",

    "org.scalatest" % "scalatest_2.10" % "2.2.1" % "test"        

Java version: Java(TM) SE Runtime Environment (build 1.8.0_25-b18) Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)

Scala version: 2.10.4

I have no hadoop or MapR installed on my local machine.

(selected) program output:

2015/01/29 11:36:20 INFO [run-main-0] ...Main$ - Starting program...
2015/01/29 11:36:20 WARN [run-main-0] ...NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015/01/29 11:36:21 INFO [run-main-0] ...Main$ - SPARK version 1.2.0 is starting...
2015/01/29 11:36:21 INFO [run-main-0] ...Main$ - SPARK config:
2015/01/29 11:36:21 INFO [run-main-0] ... Saving 'stuff' to file: XXX
2015/01/29 11:37:31 ERROR[run-main-0] o.a.h.u.Shell - Failed to locate the winutils binary in the hadoop binary path Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath( [hadoop-common-2.2.0.jar:na]
... (deleted)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1074) [spark-core_2.10-1.2.0.jar:1.2.0]
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:940) [spark-core_2.10-1.2.0.jar:1.2.0]
    at org.apache.spark.rdd.SequenceFileRDDFunctions.saveAsSequenceFile(SequenceFileRDDFunctions.scala:79) [spark-core_2.10-1.2.0.jar:1.2.0]
    at org.apache.spark.rdd.RDD.saveAsObjectFile(RDD.scala:1181) [spark-core_2.10-1.2.0.jar:1.2.0]
... (deleted)

by Tim Malt at January 29, 2015 11:11 AM


Representation of procedural knowledge

I know that knowledge about relationships between things can be represented using ontologies and stored in some sort of file or database system.

Can a network of procedural knowledge also be created in this way? Such that complex algorithms can be defined and stored efficiently, translated into other languages and forms (such as finite state machines or machine language), changed, and form the basis for other AI axioms?

i.e. Procedural Reasoning Systems -- how would a Knowledge Area (KA) be represented as a cognitive primitive in a computer system?

by Josh Wyant at January 29, 2015 11:08 AM


How do I idiomatically do Inversion of Control for routes in Play Framework

I'd like to know the idiomatically correct way to link two sub-projects together.

My current situation is that I am refactoring a big website (hundreds of .scala and .scala.html files) into a handful of smaller sub projects. Let's take a concrete example

There is a subproject that deals with Accounts, Groups and Membership. This sub project is mostly self contained. But on the group webpage, I want to put a button to allow a member of that group to do an action. In this concrete example 'Issue a document'.

There is a second subproject that declares this action. This subproject knows about Groups, but it is primarily concerned with issuing documents. There is a lot of logic and web pages around that. There are perhaps only two or three buttons in the 'Accounts, Groups and Membership' project that should end up coming into this project

Now if this was a Scala class dependency problem I have a wealth of tools. Typically I would use some form of inversion of control to allow a client class to inject the buttons.

Ideas I have explored

1: I can put hard coded routes into the application.conf. This is actually trivially easy, intention revealing (although a little obscure) but isn't type safe.

2: I can have a method on the controllers in the 'Accounts, Groups and Membership' project that allows me to programmatically add routes. This is doable but messy.

What is the idiomatically correct Play way to do it?

by Stave Escura at January 29, 2015 11:05 AM

Will zinc eat all my memory if it runs long enough?

Let's say I have a build machine and I start zinc as a long-running process. It keeps building different Scala projects. Will zinc eventually use all my memory because it needs to cache every project?

by Chandler Zhang at January 29, 2015 10:56 AM



What is the fastest way to sum a collection in Scala

I've tried different collections in Scala to sum their elements, and they are much slower than Java summing its arrays (with a for loop). Is there a way for Scala to be as fast as Java arrays?

I've heard that in Scala 2.8 arrays will be the same as in Java, but they seem much slower in practice.
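For comparison, the Java-style loop looks like this in Scala. A while loop over a raw Array[Int] compiles to roughly the same bytecode as a Java for loop; the generic arr.sum goes through the Numeric type class, which is where the historical overhead came from:

```scala
// manual summation over a primitive array, Java-style
def fastSum(arr: Array[Int]): Int = {
  var total = 0
  var i = 0
  while (i < arr.length) {
    total += arr(i)
    i += 1
  }
  total
}
```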

by Tala at January 29, 2015 10:45 AM

easy idiomatic way to define Ordering for a simple case class

I have a list of simple Scala case class instances and I want to print them in a predictable, lexicographical order using list.sorted, but I receive "No implicit Ordering defined for ...".

Does there exist an implicit that provides lexicographical ordering for case classes?

Is there a simple idiomatic way to mix lexicographical ordering into a case class?

scala> case class A(tag:String, load:Int)
scala> val l = List(A("words",50),A("article",2),A("lines",7))

scala> l.sorted.foreach(println)
<console>:11: error: No implicit Ordering defined for A.

I am not happy with a 'hack':


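For reference, one idiomatic way to get a lexicographic Ordering for a case class is to derive it from a tuple of the fields with Ordering.by:

```scala
case class A(tag: String, load: Int)

// lexicographic: compare by tag first, then by load
implicit val orderingA: Ordering[A] = Ordering.by(a => (a.tag, a.load))

val l = List(A("words", 50), A("article", 2), A("lines", 7))
val sorted = l.sorted
// sorted == List(A("article", 2), A("lines", 7), A("words", 50))
```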
by ya_pulser at January 29, 2015 10:43 AM

Why does complete'ing requests where JavaUUID is used lead to compilation error?

I'm developing a project with Akka and Spray. It works fine except GETs that are handled with the following route:

class UserRoot extends Directives with DefaultJsonFormats with PerRequestCreator {
  val route =
    path("users" / JavaUUID / "activities") { userId =>
      get {
        complete {
          createActorPerRequest(new StringUUID(userId), Props[LogicGetActivitiesFromUser])

  def createActorPerRequest(entity: Entity, target: Props): Route =
     context => perRequest(context, target, Work(entity))

The StringUUID class:

case class StringUUID(id: String) extends Entity // I also tried UUID instead of String

The code above errors with the following:

error: type mismatch;
 found   : java.util.UUID
 required: String
       createActorPerRequest(new StringUUID(userId), Props[LogicGetActivitiesFromUser])

But if I add import reflect.ClassTag (something I found on the internet), it gives me another error:

error: could not find implicit value for evidence parameter of type spray.httpx.marshalling.Marshaller[spray.routing.RequestContext => Unit]
createActorPerRequest(new StringUUID(userId.toString), Props[LogicGetActivitiesFromUser])

Any idea?

by Sharekhan at January 29, 2015 10:16 AM


Heuristic for Tournament Scheduling

I am holding a bi-yearly tournament in my city, for which I want to write a program that gives me (nearly-)optimal pairings, and waiting time. The setup is as follows:

-Up to 42 groups of 2 persons each.
-3 groups will be paired to be one team
-a game is played between 2 teams (6 groups) and takes 20 minutes
-3 games will be played at the same time, there will be 12 rounds of 3 games
-Every group has the same amount of games
-after every game, the teams change
-What I want to optimize: 
 minimize the amount of times a group gets put in a team with another group 
 they have already played with (if they played against them it is ok)
 Bonus: Minimize the games a group has to wait until their next game 
 (Since only 3 games can be played at once (so 6x3 = 18 groups), there are a lot of 
 groups who have to wait for the next round of games.)

I want the program to give me the pairings of the groups and a schedule, if possible. I am giving the exact numbers because, even if this can't be done in polynomial time, the instance is small enough that exponential time does not matter too much. The solution does not have to be optimal, but should be close.

What is a good algorithm or heuristic for my problem?

by RunOrVeith at January 29, 2015 10:07 AM


How to setup java on freebsd?

I have both Java JRE and Java JDK on a FreeBSD 7.2 box (running PFSense) from

find / -name gives me output like:

so I make a link to /usr/local/bin like so:

 ln /usr/local/diablo-jre1.6.0/bin/java /usr/local/bin/java

and now I get

# rehash
# java
Error: could not find
Error: could not find Java 2 Runtime Environment.

SOOOOOO, I'm wondering if there is some tool I can use to turn on a particular Java VM, similar to Ubuntu's /etc/jvm?

by Mark0978 at January 29, 2015 09:58 AM

Is an Option argument a code smell?

Suppose you see def foo(oa: Option[A]) = ... Doesn't the oa argument of type Option[A] look like a code smell?

Isn't it better to define foo(a: A) and use it as oa map {a => foo(a)} instead of foo(oa)?
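A tiny sketch of the two styles side by side (A and foo are placeholders from the question; the concrete bodies here are made up for illustration):

```scala
case class A(value: Int)

// the alternative from the question: foo is total over A,
// and optionality stays at the call site
def foo(a: A): Int = a.value * 2

val oa: Option[A] = Some(A(21))
val mapped: Option[Int] = oa.map(foo)   // Some(42); a None input stays None
```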

by Michael at January 29, 2015 09:56 AM



Can I change the linearization of types in Scala?

Is there a possibility to change the linearization order of types, especially traits, in Scala? I can guess that it may not be a safe choice, but is it possible?
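The linearization algorithm itself is fixed by the language, but the mixin order is under your control, and that changes the result. A small sketch (trait names made up) showing that the order in which traits are mixed in decides which override wins:

```scala
trait Base { def name: String = "Base" }
trait T1 extends Base { override def name: String = "T1 -> " + }
trait T2 extends Base { override def name: String = "T2 -> " + }

// linearization runs right-to-left over the mixins, so the last trait wins
class C1 extends T1 with T2   // linearization: C1, T2, T1, Base, ...
class C2 extends T2 with T1   // linearization: C2, T1, T2, Base, ...
```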

by Val at January 29, 2015 09:48 AM


Dealing with missing values in credit scoring

I am making a credit scorecard model based on financial ratios. Several of my variables (i.e. financial ratios) have missing values. What should I do about these?


1. After regressing, I will assign a score to each bin in each variable. I have heard about the possibility of the bin of missing values having a higher score (which means less likely to default) than some of the other bins, or even all of them.

2. A few variables have less than 10% missing, but some have 20-30%. Some have even higher percentages, since they rely on data from the year before (e.g. sales growth or net cash flow). Above what percentage of missing values should I remove a variable entirely?

3. I heard you can put the missing values into one bin under some conditions. What conditions might these be (e.g. less than 10% missing)?


by BCLC at January 29, 2015 09:40 AM


Proper way to inject dependencies into clustered persistent Akka actors?

I'm using Akka Persistence with Cluster Sharding. What is the proper way to provide dependencies into such PersistentActor-s?

As far as I understand, passing them as constructor arguments is not possible, as Cluster Sharding is creating these actors.

Using Spring/Guice/etc. is not idiomatic Scala (and possibly has other issues (?)).

Using an object to implement a singleton makes for cumbersome testing and seems bad style.

What is the proper way?

P.S. If you plan to suggest the Cake pattern, please provide sample code in this specific Akka Persistence Cluster Sharding context.

by John M at January 29, 2015 09:37 AM


Does anyone know where I can find a plain text document that lists hundreds of molecules in molecular form?

I need it for a chemistry game I am making. If y'all can't help me find such a list I'll make one and host it where it can be downloaded.

submitted by Edward_Campos

January 29, 2015 09:11 AM


implemented Iterable trait gives run-time exception in Scala

Rough Code:

// Test.scala
trait Test extends Iterable[MaxTest] with Closeable {


// AnotherClass.scala
class AnotherClass {
    def apply(iters: Seq[() => Iterator[MaxTest]]) = {

// BaseClass.scala
trait BaseClass {
    def iterSeq: Iterable[Test]

    def toAnotherClass() = {
        AnotherClass(iterSeq.map { iter => () => iter.iterator })

// DerivedClass.scala
trait DerivedClass extends BaseClass {

    def returnTestStream: Test

    override def iterSeq: Iterable[Test] = {

// ConcreteClass.scala
class ConcreteClass extends DerivedClass {

     private class ConcreteTest extends Test {

         override def iterator: Iterator[MaxTest] = new Iterator[MaxTest] {
               override def hasNext(): Boolean = {

               override def next() = {


    override def returnTestStream = {
        new ConcreteTest()


When I call,

val hello = ConcreteClass.toAnotherClass()

I'm getting

`cause: java.lang.IncompatibleClassChangeError: found interface but class was expected`.

Please help.

by Anish Shah at January 29, 2015 09:06 AM


Option Prices under the Heston Stochastic Volatility Model

I was wondering if anyone has come across a more straightforward derivation of the semi-closed-form solution for the price of a European call under the Heston model than the one proposed by Heston (1993)?

by dimebucker91 at January 29, 2015 09:04 AM


Why does IDEA mark classes even with the library showing up in External Libraries?

I'm installing the Mail plugin for my Play application, and after adding the dependencies and running sbt dependencies and sbt update, play.libs.mailer.Email does show up in External Libraries. However, when I import it, IntelliJ marks mailer as red, and if I just put play.libs.mailer.Email in code, IntelliJ marks Email as red, but not mailer.

Can anyone help me fix this issue?

by OneZero at January 29, 2015 08:59 AM

How to add "provided" dependencies back to run/test tasks' classpath?

Here's an example build.sbt:

import AssemblyKeys._




name := "scala-app-template"

version := "0.1"

scalaVersion := "2.9.3"

val FunnyRuntime = config("funnyruntime") extend(Compile)

libraryDependencies += "org.spark-project" %% "spark-core" % "0.7.3" % "provided"

sourceGenerators in Compile <+= buildInfo

buildInfoPackage := "com.psnively"

buildInfoKeys := Seq[BuildInfoKey](name, version, scalaVersion, target)

assembleArtifact in packageScala := false

val root = project.in(file(".")).
  settings(inConfig(FunnyRuntime)(Classpaths.configSettings ++ baseAssemblySettings ++ Seq(
    libraryDependencies += "org.spark-project" %% "spark-core" % "0.7.3" % "funnyruntime"
  )): _*)

The goal is to have spark-core "provided" so it and its dependencies are not included in the assembly artifact, but to reinclude them on the runtime classpath for the run- and test-related tasks.

It seems that using a custom scope will ultimately be helpful, but I'm stymied on how to actually cause the default/global run/test tasks to use the custom libraryDependencies and hopefully override the default. I've tried things including:

(run in Global) := (run in FunnyRuntime)

and the like to no avail.

To summarize: this feels essentially a generalization of the web case, where the servlet-api is in "provided" scope, and run/test tasks generally fork a servlet container that really does provide the servlet-api to the running code. The only difference here is that I'm not forking off a separate JVM/environment; I just want to manually augment those tasks' classpaths, effectively "undoing" the "provided" scope, but in a way that continues to exclude the dependency from the assembly artifact.

by user2785627 at January 29, 2015 08:56 AM



Need help with deterlab

I'm using an NS file provided to me, and I've already SSH'd into users and then into my machine, but there's nothing here. My assignment says I need to find JPEGs on this server, but I don't see anything. Am I missing something huge?

The ns file /share/education/LinuxDETERIntro_UCLA/intro.ns should be pulling information from this server and posting it onto my node system for me to screw with. It has some download files or something but they aren't showing up.

Edit: If you want to downvote at least leave some feedback.

submitted by ampaterson
[link] [3 comments]

January 29, 2015 08:53 AM


Joining 2 arrays together code improvement

There are 2 arrays. One that consists of Categories and one of Products. Each product pertains to its specific category. I want to join each product to its right category (a category can have multiple products). Each product that 'finds' its category will go in this category's products array.

Here's my code:

for ($i = 0; $i < count($prods); $i++) {
    for ($u = 0; $u < count($cats); $u++) {
        if ($prods[$i]['category_code'] === $cats[$u]['category_style_code']) {
            if (!isset($cats[$u]['products'])) {
                $cats[$u]['products'] = array();
            }
            array_push($cats[$u]['products'], $prods[$i]);
        }
    }
}

It results in something like:

    [0] => Array
            [id] => 1
            [category_style_code] => GA
            [products] => Array
                    [0] => Array
                            [id] => 1
                            [default_price] => 37.50
                            [category_code] => GA

                    [1] => Array
                            [id] => 2
                            [default_price] => 15.00
                            [category_code] => GA

Let's say there are many categories and many products... How would you optimize this code (or do it differently maybe there are PHP functions which could be used) ?
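The nested loops are O(n·m). Indexing the categories by their code first brings this down to O(n + m); here is the idea sketched in Python (assuming category codes are unique; in PHP the analogue would be building an associative array keyed by category_style_code first):

```python
def attach_products(cats, prods):
    # one pass to index categories by code, one pass to place each product
    by_code = {c["category_style_code"]: c for c in cats}  # assumes unique codes
    for p in prods:
        cat = by_code.get(p["category_code"])
        if cat is not None:  # products with no matching category are skipped
            cat.setdefault("products", []).append(p)
    return cats

cats = [{"id": 1, "category_style_code": "GA"}]
prods = [{"id": 1, "default_price": "37.50", "category_code": "GA"},
         {"id": 2, "default_price": "15.00", "category_code": "GA"}]
print(attach_products(cats, prods))
```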

by Dany P at January 29, 2015 08:52 AM

How to get Scala tests in IDEA to use javaOptions from build.sbt?

I generated an idea project using play's "gen-idea" and then imported the project via the build.sbt file in intellij (using 14 Ultimate).

My build.sbt looks like this

name := "play"

version := "1.0"

lazy val `play` = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.11.1"

libraryDependencies ++= Seq( jdbc , anorm , cache , ws )

libraryDependencies += "org.sql2o" % "sql2o" % "1.3.0"

unmanagedResourceDirectories in Test <+=  baseDirectory ( _ /"target/web/public/test" )

unmanagedJars in Compile += file("app/lib/sqljdbc4.jar")

javaOptions += "-Djava.library.path=app/lib"

I'm trying to get the java library path to be part of the launcher (for either test run/debug, or just in general) and no matter what I try I can't seem to get it to pick up.

I can see that my unmanaged jars addition to the build.sbt did pick up in the final launcher command

"C:\Program Files\Java\jdk1.7.0_60\bin\java" -agentlib:jdwp=transport=dt_socket,address=,suspend=y,server=n -Dspecs2.ex=test -Dspecs2.ex=test -Dfile.encoding=UTF-8 -classpath "C:\Users\devshorts\.IntelliJIdea14\config\plugins\Scala\lib\scala-plugin-runners.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\charsets.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\deploy.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\javaws.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\jce.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\jfr.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\jfxrt.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\jsse.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\management-agent.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\plugin.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\resources.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\rt.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\access-bridge-64.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\dnsns.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\jaccess.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\localedata.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\sunec.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\sunjce_provider.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\sunmscapi.jar;
C:\Program Files\Java\jdk1.7.0_60\jre\lib\ext\zipfs.jar;
C:\Program Files (x86)\JetBrains\IntelliJ IDEA 14.0.2\lib\idea_rt.jar" org.jetbrains.plugins.scala.testingSupport.specs2.JavaSpecs2Runner -s Foobar -testName test -showProgressMessages true -C org.jetbrains.plugins.scala.testingSupport.specs2.JavaSpecs2Notifier

Being new to Scala development, what am I doing wrong here?

by devshorts at January 29, 2015 08:38 AM

Ignore Scala Compile Errors Like Java

How can I skip scalac compile errors like one can with Java?

In Java this causes the compiler to skip the erroneous file, compile the rest, and leave the failure to run-time.

I use SBT but since it uses scalac, I think it would be a command line parameter for scalac that passes through from SBT.


My goal is to run the program, and have it fail at run-time when the error is reached.

For example, there can be a main GUI that has no errors with a button to start a server that does.

In Java the GUI will run, and it fails at run-time only when the button to start the server is clicked.

In Scala the program won't even run, because compilation fails on the server file before anything can be launched.

by BAR at January 29, 2015 08:33 AM

What is a 'Closure'?

I asked a question about Currying and closures were mentioned. What is a closure? How does it relate to currying?
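In short: a closure is a function that captures variables from the scope where it was defined, and keeps access to them after that scope has exited. A small illustration (shown in Python; the concept is language-independent):

```python
def make_adder(n):
    # the inner function closes over n: it remembers n after make_adder returns
    def add(x):
        return x + n
    return add

add5 = make_adder(5)
print(add5(3))  # 8

# relation to currying: a curried function returns closures, each capturing
# the arguments fixed so far
def curried_add(a):
    return lambda b: a + b  # the lambda closes over a

print(curried_add(2)(3))  # 5
```

Currying and closures go together because each stage of a curried function is a closure over the arguments supplied earlier.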

by Ben at January 29, 2015 08:25 AM

What are the relationships between Any, AnyVal, AnyRef, Object and how do they map when used in Java code?

I usually end up trying every combination until it compiles. Can somebody explain what I should use where?

by huynhjl at January 29, 2015 08:16 AM



Complexity of optimizing with respect to one criterion with bounds set by another

I would like to know any references for this scheduling problem. We will have two measures, time and cost in dollars. The idea is to minimize the cost in dollars while making sure that hard deadlines on the time are still met.

Consider tasks $T_1, \ldots, T_n$ and machines $M_1, \ldots, M_m$. Each task takes a certain amount of time to perform depending on the machine it is run on. A single task will be run on one machine from start to completion. There is no preemption, and a task can't be moved from one machine to another while it is being processed.

Each task also costs a certain integer number of dollars to perform depending on the machine it is run on.

The tasks have to be done in strict order (they can't be done in parallel). So task $T_1$ must be performed first; our only choice is which machine to run it on. Then, once that is completed, we must decide which machine to run task $T_2$ on.

There is however a penalty in terms of both time and cost in dollars to changing machine. If we run task $i$ on the same machine as task $i-1$ there is no penalty. But if we run it on a different machine there is some integer time penalty and also some integer cost in dollars penalty. These switching penalties will be specified by two simple tables of non-negative integers, one for time and one for cost in dollars. The switching penalties (which can be different for time and cost in dollars) depend only on the machine being switched from and the machine being switched to.

Finally, each task has an integer deadline which must be met. This is just given as a list of deadlines.

The objective is to give an algorithm that minimizes the total cost in dollars to complete all the tasks while making sure all the tasks are completed by their deadlines. If this isn't possible we just report that it can't be done.

There is a straightforward pseudo-polynomial time dynamic programming solution.
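For concreteness, here is one way such a pseudo-polynomial DP can look, sketched in Python (my own formulation of the model above, not taken from a reference; the state is the machine of the last task together with the elapsed time, so the table size is bounded by the number of machines times the largest deadline):

```python
from math import inf

def min_dollar_cost(times, costs, switch_t, switch_c, deadlines):
    """times[i][m] / costs[i][m]: duration / dollar cost of task i on machine m.
    switch_t[a][b] / switch_c[a][b]: penalties for switching machine a -> b.
    deadlines[i]: hard completion deadline for task i.
    Returns the minimum total dollar cost, or None if infeasible."""
    n, m = len(times), len(times[0])
    # dp maps (machine of last task, elapsed time) -> min dollar cost so far
    dp = {}
    for mach in range(m):
        if times[0][mach] <= deadlines[0]:
            key = (mach, times[0][mach])
            dp[key] = min(dp.get(key, inf), costs[0][mach])
    for i in range(1, n):
        nxt = {}
        for (prev, t), c in dp.items():
            for mach in range(m):
                dt = times[i][mach] + (0 if mach == prev else switch_t[prev][mach])
                dc = costs[i][mach] + (0 if mach == prev else switch_c[prev][mach])
                if t + dt <= deadlines[i]:
                    key = (mach, t + dt)
                    nxt[key] = min(nxt.get(key, inf), c + dc)
        dp = nxt
    return min(dp.values()) if dp else None

# two tasks, two machines, no switching penalties: the cheaper machine wins
times = [[1, 2], [1, 2]]
costs = [[5, 1], [5, 1]]
zero = [[0, 0], [0, 0]]
print(min_dollar_cost(times, costs, zero, zero, [10, 10]))  # 2
```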

Is this a known problem? Is it known to be NP-hard?

by Anush at January 29, 2015 08:12 AM

Planet Theory

More FOCS 2014-blogging

In the spirit of better late than never, some more updates from Amirali Abdullah from his sojourn at FOCS 2014. Previously, he had blogged about the higher-order Fourier analysis workshop at FOCS.

I'll discuss now the first official day of FOCS, with a quick digression into the food first: the reception was lovely, with some nice quality beverages, and delectable appetizers which I munched on to perhaps some slight excess. As for the lunches given to participants, I will think twice in future about selecting a kosher option under dietary restrictions. One hopes for a little better than a microwave instant meal at a catered lunch, with the clear plastic covering still awaiting being peeled off. In fairness to the organizers, once I decided to revert to the regular menu on the remaining days, the chicken and fish were perfectly tasty.

I will pick out a couple of the talks I was most interested in to summarize briefly. This is of course not necessarily a reflection of comparative quality or scientific value; just which talk titles caught my eye.

The first talk is "Discrepancy minimization for convex sets" by Thomas Rothvoss. The basic setup of a discrepancy problem is this: consider a universe of $n$ elements, $[n]$, and a set system of $m$ sets ($m$ may also be infinite), $S = \{S_1, S_2, \ldots, S_m \}$, where $S_i \subset [n]$. Then we want to find a $2$-coloring $\chi : [n] \to \{-1, +1 \}$ such that each set is as evenly colored as possible. The discrepancy then measures how unevenly colored some set $S_i \in S$ must be under the best possible coloring.
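To make the definition concrete, here is a brute-force computation of the discrepancy of a tiny set system (a Python sketch for intuition only; it enumerates all $2^n$ colorings):

```python
from itertools import product

def discrepancy(n, sets):
    # min over colorings chi: [n] -> {-1, +1} of the max over sets of |sum of chi|
    best = None
    for chi in product((-1, 1), repeat=n):
        worst = max(abs(sum(chi[i] for i in s)) for s in sets)
        if best is None or worst < best:
            best = worst
    return best

# the three 2-element subsets of a triangle: some pair is always monochromatic,
# so no coloring does better than discrepancy 2
print(discrepancy(3, [{0, 1}, {1, 2}, {0, 2}]))  # 2
```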

One fundamental result is that of Spencer, which shows there always exists a coloring of discrepancy $O(\sqrt{n})$. This shaves a logarithmic factor off of a simple random coloring, and the proof is non-constructive. This paper by Rothvoss gives the first algorithm that serves as a constructive proof of the theorem.

The first (well-known) step is that Spencer's theorem can be recast as a problem in convex geometry. Each set $S_i$ can be converted to a geometric constraint in $R^n$, namely define a region $x \in R^n : \{ \sum_{j \in S_i} | x_j | \leq 100 \sqrt{n} \}$. Now the intersection of these set of constraints define a polytope $K$, and iff $K$ contains a point of the hypercube $\{-1 , +1 \}^n$ then this corresponds to the valid low discrepancy coloring.

One can also of course do a partial coloring iteratively - if a constant fraction of the elements can be colored with low discrepancy, it suffices to repeat.

The algorithm is surprisingly simple and follows from the traditional idea of trying to solve a discrete problem from the relaxation. Take a point $y$ which is generated from the spherical $n$-dimensional Gaussian with variance 1. Now find the point $x$ closest to $y$ that lies in the intersection of the constraint set $K$ with the continuous hypercube $[-1, +1]^n$. (For example, by using the polynomial time ellipsoid method.) It turns out some constant fraction of the coordinates of $x$ are actually tight (i.e., integer valued in $\{-1, +1 \}$) and so $x$ turns out to be a good partial coloring.

To prove this, the paper shows that with high probability all subsets of $[-1, +1]^n$ with very few tight coordinates are far from the starting point $y$. Whereas with high probability, the intersection of $K$ with some set having many tight coordinates is close to $y$. This boils down to showing the latter has sufficiently large Gaussian measure, and can be shown by standard tools in convex analysis and probability theory. Or to rephrase, the proof works by arguing about the isoperimetry of the concerned sets.

The other talk I'm going to mention from the first day is by Karl Bringmann on the hardness of computing the Frechet distance between two curves. The Frechet distance is a measure of curve similarity, and is often popularly described as follows: "if a man and a dog each walk along two curves, each with a designated start and finish point, what is the shortest length leash required?"

The problem is solvable in $O(n^2)$ time by simple dynamic programming, and has since been improved to $O(n^2 / \log n)$ by Agarwal, Avraham, Kaplan and Sharir. It has long been conjectured that there is no strongly subquadratic algorithm for the Frechet distance. (A strongly subquadratic algorithm being defined as $O(n^{2 -\delta})$ complexity for some constant $\delta$, as opposed to say  $O(n^2 / polylog(n))$.)
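For reference, the quadratic dynamic program for the discrete Frechet distance is short (a Python sketch of the standard recurrence, not of the Agarwal et al. speedup; `math.dist` needs Python 3.8+):

```python
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j): discrete Frechet distance of the prefixes P[:i+1], Q[:j+1]
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        # man advances, dog advances, or both advance together
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# two parallel horizontal segments one unit apart: leash length 1
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0
```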

The work by Bringmann shows this conjecture to be true, assuming SETH (the Strongly Exponential Time Hypothesis), or more precisely that there is no $O*((2- \delta)^N)$ algorithm for CNF-SAT. The hardness result holds for both the discrete and continuous versions of the Frechet distance, as well as for any $1.001$ approximation.

The proof works on a high level by directly reducing an instance of CNF-SAT to two curves where the Frechet distance is smaller than $1$ iff the instance is satisfiable. Logically, one can imagine the set of variables are split into two halves, and assigned to each curve. Each curve consists of a collection of "clause and assignment" gadgets, which encode whether all clauses are satisfied by a particular partial assignment. A different such gadget is created for each possible partial assignment, so that there are $O*(2^{N/2})$ vertices in each curve. (This is why solving Frechet distance by a subquadratic algorithm would imply a violation of SETH.)

There are many technical and geometric details required in the gadgets which I won't go into here. I will note admiringly that the proof is surprisingly elementary. No involved machinery or complexity result is needed in the clever construction of the main result; mostly just explicit computations of the pairwise distances between the vertices of the gadgets.

I will have one more blog post in a few days about another couple of results I thought were interesting, and then comment on the Knuth Prize lecture by the distinguished Dick Lipton.

by Suresh Venkatasubramanian ( at January 29, 2015 08:11 AM


How can I gather state information from a set of actors using only the actorSystem?

I'm creating an actor system, which has a list of actors representing some kind of session state. These session are created by a factory actor (which might, in the future, get replaced by a router, if performance requires that - this should be transparent to the rest of the system, however). Now I want to implement an operation where I get some state information from each of my currently existing session actors. I have no explicit session list, as I want to rely on the actor system "owning" the sessions. I tried to use the actor system to look up the current session actors. The problem is that I did not find a "get all actor refs with this naming pattern" method. I tried to use the "/" operator on the system, followed by resolveOne - but got lost in a maze of future types.

The basic idea I had was:

- Send a message to all current session actors (as given to me by my ActorSystem).
- Wait for a response from them (preferably by using just the "ask" pattern - the method calling this broadcast request/response is just a monitoring/debugging method, so blocking is no problem here).
- Then collect the responses into a result.

After a death match against Scala's type system I had to give up for now. Is there really no way of doing something like this?

by Wolfgang Liebich at January 29, 2015 08:10 AM


Lambda the Ultimate Forum

Negation in Logic Languages

This is a follow-up from my earlier post (I will try and put a link here) about negation in Prolog. The question was whether negation is necessary, and I now have some further thoughts (none of this is new, but new to me). I can now see my earlier approach to implementing 'member of' relies on evaluation order, which no longer works now that I have moved to iterative deepening. So I need a way of introducing negation that is logically sound.

Negation as failure leads to unsoundness with variables. Negation as refutation seems to require three-valued logic, and negation as inconsistency requires an additional set of 'negative' goals, which complicates the use of negation. The most promising approach I have found is to convert negation into inequalities, and propagate 'not equal' as a constraint in the same way as CLP. This eliminates negation in goals in favour of equality and disequality.

I am interested if anyone has any experience of treating negation in this way, or any further thoughts about negation in logic programming or logical frameworks.

January 29, 2015 08:00 AM


Planet Emacsen

Eric James Michael Ritz: Going Evil

After suffering much inner turmoil, I committed the greatest sin of any Emacs user. I am talking about true heresy. There are many great articles by long-time Vim users who made the switch to Emacs via Evil. But I am approaching Evil from the opposite direction. I have used Vim only sparingly, while I’ve used Emacs for a lot of years. Today I want to talk about why I decided to use Evil and share my initial impressions.

Chords, the Gateway Drug

For a while I’d been using Key Chord Mode by David Andersson. It allows me to define two-key chords which I use to simplify commonly used commands, e.g. qv runs vc-next-action, $$ runs ispell-buffer, qt runs tiny-expand, and so on. Over time I found myself defining more and more of these chords.

Then one day I had the thought, “You know what editor uses chords to great effect….”

Giving in to the Dark Side

Thus I decided to setup Evil, along with a bevy of related packages available at MELPA. It did not take long to adjust, e.g. using / to search instead of C-s, or C-f instead of C-v. My habit of using the aforementioned key-chords helped, but my extensive use of another program greatly eased the transition: Conkeror. The browser is almost entirely keyboard-driven and has a system of chords at its core. Most are in the format of object-verb. So for example, nf follows a link, nc copies a link, ic copies the URL for an image, et cetera. This concept of using individual keys to represent objects and actions on them is also at the heart of Vim. And even though Conkeror is inspired by Emacs, it feels more like Vim in that regard.

Initial Impressions

I have yet to encounter any snags or serious problems with Evil. There are times when I must resort to a cheat-sheet. And there are even times when I'll use Emacs commands (using \ in Evil). But these 'crutches' are disappearing quickly.

The most important question, however, is this: Do I feel more productive by using Evil? While I have no metrics available for support, I must answer with a resounding Yes. That is not to say my productivity has doubled or anything like that. But I am moving through files more quickly, and editing, deleting, and re-arranging chunks of text faster, all of which makes me feel more productive in Emacs than before.


Give Evil a try. Just don’t tell your die-hard Emacs fans or you may suddenly find yourself ex-communicated.

by ericjmritz at January 29, 2015 07:42 AM


List filtering: list comprehension vs. lambda + filter

I happened to find myself having a basic filtering need: I have a list and I have to filter it by an attribute of the items.

My code looked like this:

my_list = [i for i in my_list if i.attribute == value]

But then I thought, wouldn't it be better to write it like this?

filter(lambda x: x.attribute == value, my_list)

It's more readable, and if needed for performance the lambda could be taken out to gain something.

Question is: are there any caveats in using the second way? Any performance difference? Am I missing the Pythonic Way™ entirely and should do it in yet another way (such as using itemgetter instead of the lambda)?
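One concrete caveat, easy to check: in Python 3 filter returns a lazy iterator, not a list, so the two forms only compare equal after materializing it (a small self-contained check, with a stand-in Item type):

```python
from collections import namedtuple

Item = namedtuple("Item", "attribute")
my_list = [Item(1), Item(2), Item(1)]
value = 1

comp = [i for i in my_list if i.attribute == value]
filt = list(filter(lambda x: x.attribute == value, my_list))  # list() needed in Python 3
assert comp == filt
print(comp)  # [Item(attribute=1), Item(attribute=1)]
```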

by Agos at January 29, 2015 07:38 AM


How can we make this function better?

Hey! I have this function which takes an array such as [5 12 22 3] and returns a sequence of [5 17 39 42], adding up the preceding terms one by one: 5, 5+12, 5+12+22, 5+12+22+3.

I'm looking for a way to do this without using a counter. Any ideas?

(let [array [5 12 22 3] counter (atom 0)] (for [i array] (do (swap! counter inc) (reduce + (first (split-at @counter array)))))) 

Bonus question: what type of sequence is this?

Because I don't know :)

Edit: as in how is it classified mathematically?
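Mathematically this is the sequence of partial sums (prefix sums) of the array. For comparison, the counter-free version of the same computation in Python is a one-liner with itertools.accumulate (Clojure's reductions plays the same role):

```python
from itertools import accumulate

# running totals of the preceding terms: 5, 5+12, 5+12+22, 5+12+22+3
print(list(accumulate([5, 12, 22, 3])))  # [5, 17, 39, 42]
```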

submitted by QuestionProgram
[link] [11 comments]

January 29, 2015 07:26 AM


Which are the most utilized programming languages in finance? [duplicate]

This question already has an answer here:

I would like to improve my knowledge of programming languages, in order to have a possible advantage in the future when applying for my first job. Since finance interests me a lot, I would like to know, from those actually working in this specific field, which programming languages are most used in finance. I have a medium knowledge of R and a basic knowledge of VBA/Excel. I'm currently considering improving VBA and starting to learn C++. Is this the right path to follow, or should I consider other programming languages? Let me know, and thank you in advance for your answers.

by BlackhawksNation at January 29, 2015 07:18 AM


How to update a mongo record using Rogue with MongoCaseClassField when case class contains a scala Enumeration

I am upgrading existing code from Rogue 1.1.8 to 2.0.0 and lift-mongodb-record from 2.4-M5 to 2.5.

I'm having difficulty writing a MongoCaseClassField that contains a Scala enum, and I could really use some help with it.

For example,

object MyEnum extends Enumeration {
  type MyEnum = Value
  val A = Value(0)
  val B = Value(1)
}

case class MyCaseClass(name: String, value: MyEnum.MyEnum)

class MyMongo extends MongoRecord[MyMongo] with StringPk[MyMongo] {
  def meta = MyMongo

  class MongoCaseClassFieldWithMyEnum[OwnerType <: net.liftweb.record.Record[OwnerType], CaseType](rec : OwnerType)(implicit mf : Manifest[CaseType]) extends MongoCaseClassField[OwnerType, CaseType](rec)(mf) {
    override def formats = super.formats + new EnumSerializer(MyEnum)
  }

  object myCaseClass extends MongoCaseClassFieldWithMyEnum[MyMongo, MyCaseClass](this)
  /// ...
}

When we try to write to this field, we get the error:

could not find implicit value for evidence parameter of type com.foursquare.rogue.BSONType[MyCaseClass]
   .and(_.myCaseClass  setTo myCaseClass)

We used to have this working in Rogue 1.1.8, by using our own version of the MongoCaseClassField, which made the #formats method overridable. But that feature was included into lift-mongodb-record in 2.5-RC6, so we thought this should just work now?

Many thanks in advance for any help, Juneyt

by Juneyt Donmez at January 29, 2015 07:16 AM

Split list into multiple lists with fixed number of elements

How to split a List of elements into lists with at most N items?

ex: Given a list with 7 elements, create groups of 4, leaving the last group possibly with fewer elements.

List(1, 2, 3, 4, 5, 6, "seven")
=> List(List(1,2,3,4), List(5,6,"seven"))
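Scala's collections have this built in: list.grouped(4).toList. The same slicing idea, illustrated in Python:

```python
def chunk(lst, n):
    # groups of at most n elements; the last group may be shorter
    return [lst[i:i + n] for i in range(0, len(lst), n)]

print(chunk([1, 2, 3, 4, 5, 6, "seven"], 4))  # [[1, 2, 3, 4], [5, 6, 'seven']]
```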

by Jhonny Everson at January 29, 2015 07:10 AM


2's complement addition with ZF/Carry/Overflow

Consider addition of two numbers when CPU uses $2's$ complement form: $$ 1\ 1\ 0\ 0\ 0\ 0\ 1\ 1\\0\ 1\ 0\ 0\ 1\ 1\ 0\ 0\\-------\\0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\\------- $$

$$Carry\ = 1,\ Overflow = 0, \ Zero Flag = 0$$

My Reasoning:

  • Zero Flag isn't set because the result isn't zero
  • Overflow isn't set because Carry in = Carry out

However, the answer says that the Zero Flag is set ($ZF = 1$). Where am I going wrong?


  • Got ZF/Carry/Overflow info from Here
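The flag logic can be checked mechanically. A Python sketch of 8-bit addition with flag computation (CF is the carry out of bit 7; OF is set when two same-sign operands produce a result of the opposite sign; ZF when the 8-bit result is zero):

```python
def add8(a, b):
    total = a + b
    result = total & 0xFF
    cf = int(total > 0xFF)            # carry out of bit 7
    sa, sb, sr = a >> 7, b >> 7, result >> 7
    of = int(sa == sb and sr != sa)   # signed (two's complement) overflow
    zf = int(result == 0)
    return result, cf, of, zf

# the operands from the question: result 00001111, CF=1, OF=0, ZF=0
print(add8(0b11000011, 0b01001100))  # (15, 1, 0, 0)
```

On these operands the result is nonzero, so ZF is indeed 0, agreeing with the reasoning in the question.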

by SimpleGuy at January 29, 2015 07:07 AM


Does kernel have its own stack (not kernel thread)? And how to read the `vm_map` structure of kernel in FreeBSD?

I need to find all the kernel-owned memory regions under FreeBSD x86_64. One option is to traverse vm_map_entry and find the start_addr and end_addr as K0-K1, K2-K3, K4-K5, K7-K8.

As I noticed, there is no stack in these areas. I believe kernel has a very limited stack, but how to find its address?

Also, how to know which vm_map is kernel's. I.e., how to write a kernel module to read the information of kernel vm_map?

by WindChaser at January 29, 2015 06:54 AM

How can I ignore scala library while sbt assembly

I am using sbt to build my scala project.

This is my build.sbt:

name := "My Spark App"
version := "1.0"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.2.0" % "provided"

I am running sbt assembly to create an assembly jar, but I found a scala directory containing Scala library class files.

Is it possible to treat the Scala library as a provided dependency, since the run-time environment already contains Scala?

by Eric Zheng at January 29, 2015 06:53 AM


ADHD medication and Writing code. Code is my drug? I don't think this is normal.

So I am wondering what peoples experience with certain ADHD medication and programming and how they deal with it.

I recently started taking medication again to help me at work (Linux administrator; it helped massively, as I haven't taken anything since 2006-ish) and noticed in the past 3 weeks that writing code is literally crack to me, well, at least what I imagine crack to be, as I have literally never held an addiction before, not even caffeine.

It's like "Oh ********* I just need to write this or I might kill something", and anytime I have time between waiting on commands I work on a little side project to help me at work. If I have gone 4-5 hours without a chance to write code of some kind (Bash one-liners, a JavaScript utility I'm making, fancy awk stuff) I start going through withdrawals.

The first day I started taking it I put like 3000 lines of code into a personal project of mine over the course of 18 hours. Straight. Got up like twice to get food and do restroom stuff. This is a project I've been working on since 2007, on and off, with the current code base restarted around 2011.

In the past 3 weeks I've added another 5000 or so lines of code to it. To put it into perspective, I hadn't touched it since 2012 and it had a code base of about 8,000 lines.

I also spent about 5 hours and wrote an entire, somewhat detailed and balanced mod for the game Factorio, despite never having done anything with Lua before. There's not much actual code to it given the nature of Factorio mods, but still.

I'm not really worried about the dosage, because outside my code addiction everything has been normal after the first 2 days (well, better than normal: I can focus for more than 30 seconds and context switching isn't an issue anymore).

submitted by masshuku
[link] [7 comments]

January 29, 2015 06:47 AM


I've been having trouble installing clojure/lein on windows.

It's a tad embarrassing, but I've been trying to install Clojure on Windows for a while, with no luck. Can anyone do a quick write-up on how to effectively start working with lein on Windows? Thanks

submitted by ThisIsFakeAliasYe
[link] [6 comments]

January 29, 2015 06:35 AM


$25_r = 23_{10}$ solve for the base r

First of all, this is a homework question and I don't want the solution. I just want a reference for how to solve similar questions. I believe it's explained in my course textbook, "Computer Organization & Architecture: Themes and Variations", but I currently cannot afford the textbook.

Here's the question:

For each of the following numbers, state the base in use; that is, what are the values of r, s, and t?

a. $25_r = 23_{10}$

b. $1001_s = 19684_{10}$

c. $1011_t = 4931_{10}$

I recognize it's similar to solving an equation. I'm guessing I have to find r, s and t and they will be a specific base that matches the base ten number. I tried searching online for similar questions but I'm not sure what to search for so I have no clue where to start for solving these equations.

Any help would be appreciated.
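Without spoiling the homework: a numeral like $14_b$ expands to $1 \cdot b + 4$, so each part is a small polynomial equation in the unknown base, and for small cases a brute-force search over candidate bases also works. A Python sketch using a made-up example ($14_b = 9_{10}$, so $b = 5$):

```python
def find_base(digits, target, max_base=100):
    # digits is the list of digit values, most significant first
    start = max(max(digits) + 1, 2)  # every digit must be < base
    for base in range(start, max_base + 1):
        value = 0
        for d in digits:
            value = value * base + d  # Horner evaluation of the numeral
        if value == target:
            return base
    return None

print(find_base([1, 4], 9))  # 5, since 1*5 + 4 = 9
```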

by 167165 at January 29, 2015 06:14 AM


Clojure Performance, How to Type hint to r/map

Below, I have 2 functions computing the sum of squares of their arguments. The first one is nice and functional, but 20x slower than the second one. I presume that the r/map is not taking advantage of aget to retrieve elements from the double-array, whereas I'm explicitly doing this in function 2.

Is there any way I can further type-hint or otherwise help r/map and r/fold perform faster?

(defn sum-of-squares
  "Given a vector v, compute the sum of the squares of elements."
  ^double [^doubles v]
  (r/fold + (r/map #(* % %) v)))

(defn sum-of-squares2
  "This is much faster than above.  Post to stack-overflow to see."
  ^double [^doubles v]
  (loop [val 0.0
         i (dec (alength v))]
    (if (neg? i)
      val
      (let [x (aget v i)]
        (recur (+ val (* x x)) (dec i))))))

(def a (double-array (range 10)))
(quick-bench (sum-of-squares a))

800 ns

(quick-bench (sum-of-squares2 a))

40 ns
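For readers not fluent in Clojure, the same two shapes in Python, purely to illustrate functional fold-over-map versus an explicit loop (this says nothing about the Clojure timings):

```python
from functools import reduce

def sum_of_squares(v):
    # functional style, analogous to r/fold over r/map
    return reduce(lambda acc, x: acc + x * x, v, 0.0)

def sum_of_squares2(v):
    # explicit loop, analogous to loop/recur with aget
    total = 0.0
    for x in v:
        total += x * x
    return total

assert sum_of_squares(range(10)) == sum_of_squares2(range(10)) == 285.0
```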

by Scott at January 29, 2015 06:05 AM



How to know the address range of kernel stack in user process and kernel thread?

I'd like to know the address range of the kernel stack. For a user-space process, we can use /proc/pid/maps to see the user stack address range via the [stack] keyword, but that does not show the kernel stack. For a kernel thread, /proc/pid/maps is usually empty.

So how can I find the kernel stack address range, for a user-space process and for a kernel thread, on FreeBSD?


It seems that the kernel allocates two pages (on IA-32) for each kernel thread; how could we find the address of these two pages under x86_64? (x86_64 may be a little different.)

by WindChaser at January 29, 2015 05:47 AM

I need to limit my program not to accept any decimal values in my program

I am instructed that I have to reject any decimal value and make the user re-enter the number. I tried this code, but it still goes through the whole process before acknowledging the error. Try the program and judge me :D Here's my code:

using namespace std;

int getInt()
{
    int m = 0;

    while (!(cin >> m))
    {
        cout << "Please input a proper 'whole' number: ";
        // ...
    }

    return (m);
}

int main()
{
    double x;
    int q, w, e, choice;
    cout << "Welcome! This program will sort out the integers you will input!\nPlease input number of integers: ";
    // ...
    int* inc = new int[q];
    int* dec = new int[q];
    for (int p = 1; p <= q; ++p)
    {
        cout << "Input integer number " << p << ": ";
        x = getInt();

        while (e > 0 && inc[e-1] > x)
        {
            // ...
        }
        while (w > 0 && dec[w-1] < x)
        {
            // ...
        }
    }
    cout << "What order do you prefer? Input 1 for increasing and 2 if decreasing.\nChoice: ";
    while (choice < 1 || choice > 2)
    {
        cout << "Please input a correct choice! Try again!\nChoice: ";
        // ...
    }
    for (int i = 0; i < q; ++i)
    {
        // ...
    }
    for (int i = 1; i <= q; ++i)
    {
        // ...
    }
}
hoping for your help :)
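The underlying issue is that cin >> m on input like 2.5 happily reads the 2 and leaves .5 in the stream. The usual fix is to read a whole token and validate it before converting, re-prompting on failure; a sketch of that idea in Python (the C++ analogue would use getline plus validation before parsing):

```python
def get_int(prompt="Please input a whole number: "):
    while True:
        s = input(prompt).strip()
        try:
            return int(s)  # int() rejects "2.5", so decimals trigger a re-prompt
        except ValueError:
            prompt = "Please input a proper 'whole' number: "
```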

by darrel pinga at January 29, 2015 05:33 AM

Node.js ZeroMQ not working

My node.js ZeroMQ client isn't working. Here's the server process:

zmq = require 'zmq'
sub = zmq.socket 'sub'
sub.on('message', (msg) ->
    console.log msg
)

And here's the client:

zmq = require 'zmq'
pub = zmq.socket 'pub'

pub.connect('tcp://HOST:3333', (e) ->
    console.error e
)

pub.on('error', (e) ->
    console.error e
)

pub.send('msg testing')

When I run this on the same machine as the server process I see the message get logged, as expected. When running it on a different machine nothing gets logged, so presumably the message doesn't make it through to the remote machine. None of the error callbacks log anything either.

I can successfully connect to port 3333 on the remote machine using nc though, so it's not a firewall or connectivity issue.

I've also written the following Python program, which, when run from the same remote machine that I ran the CoffeeScript example from, does result in the message being logged on the server.

import zmq
ctx = zmq.Context.instance()
s = ctx.socket(zmq.PUB)
s.connect('tcp://HOST:3333')
s.send_string('msg testing')

That leads me to believe that this is a node.js zmq library issue, but how can I further debug this?

by Ben Dowling at January 29, 2015 05:16 AM


Spacemacs like keybindings

Hi, I found the spacemacs package a little too much for my use, so I started configuring my own setup. Now I have most of the add-ons I really like configured the way I like.

The last piece missing is that I'd like to have the same kind of keybindings that spacemacs has: SPC+f for file-related operations, SPC+b for buffer-related ones, and so on.

How can I bind SPACE key like this?

submitted by Erakko
[link] [11 comments]

January 29, 2015 05:16 AM


ClassTag based pattern matching fails for primitives

I thought the following would be the most concise and correct form to collect elements of a collection which satisfy a given type:

def typeOnly[A](seq: Seq[Any])(implicit tag: reflect.ClassTag[A]): Seq[A] =
  seq.collect {
    case tag(t) => t
  }

But this only works for AnyRef types, not primitives:

typeOnly[String](List(1, 2.3, "foo"))  // ok. List(foo)
typeOnly[Double](List(1, 2.3, "foo"))  // fail. List()

Obviously the direct form works:

List(1, 2.3, "foo") collect { case d: Double => d }  // ok. List(2.3)

So there must be a (simple!) way to fix the above method.
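
One possible fix (a sketch, not necessarily the only approach): the elements of a Seq[Any] are boxed on the JVM, while ClassTag[Double].runtimeClass is the primitive double.class, so the extractor never matches. Comparing against the corresponding boxed class makes primitives work too:

```scala
import scala.reflect.ClassTag

def typeOnly[A](seq: Seq[Any])(implicit tag: ClassTag[A]): Seq[A] = {
  // Map a primitive runtimeClass to its boxed wrapper class.
  val cls: Class[_] = tag.runtimeClass match {
    case java.lang.Integer.TYPE   => classOf[java.lang.Integer]
    case java.lang.Long.TYPE      => classOf[java.lang.Long]
    case java.lang.Double.TYPE    => classOf[java.lang.Double]
    case java.lang.Float.TYPE     => classOf[java.lang.Float]
    case java.lang.Short.TYPE     => classOf[java.lang.Short]
    case java.lang.Byte.TYPE      => classOf[java.lang.Byte]
    case java.lang.Boolean.TYPE   => classOf[java.lang.Boolean]
    case java.lang.Character.TYPE => classOf[java.lang.Character]
    case c                        => c
  }
  // Test the boxed element against the (possibly remapped) class.
  seq.collect { case t if cls.isInstance(t.asInstanceOf[AnyRef]) => t.asInstanceOf[A] }
}
```

With this version, typeOnly[Double](List(1, 2.3, "foo")) yields List(2.3) as well.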

by 0__ at January 29, 2015 05:10 AM

Issue Running Shapeless in REPL

Given the following SBT files:

$cat shapeless_sandbox/build.sbt
name := "shapeless sandbox"

scalaVersion := "2.11.5"

libraryDependencies ++= Seq(
  "com.chuusai" %% "shapeless" % "2.1.0-RC1"
)

resolvers ++= Seq(
)

// Fork JVM when `run`-ing SBT
fork in run := true

And the SBT version:

$cat shapeless_sandbox/project/

Running sbt, update, and then console, I can't run the examples from the Feature Overview.

scala> import poly._
<console>:7: error: not found: value poly
       import poly._

What am I missing?

by Kevin Meredith at January 29, 2015 05:06 AM

Slick 2 Convert (Column[A], Column[B]) to Column[(A,B)]

Using Slick 2, I am trying to generate a query with a tupled IN clause:

select * from my_table where (a, b) IN ((1, 87));


val seq: Seq[(Int, Long)]

val a: Column[Int]
val b: Column[Long]

I am trying to generate the query along the lines:

(a, b) inSetBind seq

This doesn't work, as (a, b) is of type (Column[Int], Column[Long]) and not Column[(Int, Long)]. Is it possible to convert this? There used to be a ~ operator in Slick 1 that did something similar, but that appears to be gone in version 2.
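
A common workaround is to expand the tuple IN into a disjunction of per-row conjunctions, i.e. (a === x && b === y) || ... for each pair in seq. Sketched below with plain Scala Booleans standing in for Slick's Column[Boolean] (the names are illustrative, not Slick API):

```scala
// Plain-Scala model of the expansion; in real Slick 2 lifted-embedding code
// the same fold would use === , && and || on Column values, producing a
// Column[Boolean] suitable for .filter:
//   seq.map { case (x, y) => (a === x) && (b === y) }.reduceLeft(_ || _)
val pairs: Seq[(Int, Long)] = Seq((1, 87L), (2, 99L))

def tupleIn(a: Int, b: Long): Boolean =
  pairs.map { case (x, y) => a == x && b == y }.reduceLeft(_ || _)
```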

by triggerNZ at January 29, 2015 05:00 AM

Java io library: What is the difference between File.toString() and File.getPath()

... since it seems that both return the same string - take a look at this Scala code:

scala> val f = new File("log.txt")
scala> f.getPath
// res6: String = log.txt
scala> f.toString
// res7: String = log.txt

by lolski at January 29, 2015 04:53 AM


what ontologies in computer science are used for?

Are ontologies in computer science used only for defining the meaning of, and relations between, terms in a domain, or can they also be used to model the logic of a robot's actions (for example, the set of actions a robot performs, the physical rules and relations between them, and the strategies and tactics the robot, like a human, can use to solve different problems)?

Can ontologies written in the OWL language describe logical things? In a cognitive system that has some ontologies and a cognitive architecture, what exactly is the role of the ontologies?

Are they supposed to provide all the knowledge and logical rules needed to make an intelligent entity?

by Ali Nfr at January 29, 2015 04:45 AM

Wes Felter


Why is this ansible lineinfile command to check for a line in /etc/sudoers failing when a very similar one is succeeding?

I have Kodi running on a Raspberry Pi, for which I'm writing an Ansible playbook. This playbook includes two tasks that check that a line is present in /etc/sudoers, with one passing consistently but the other failing consistently. I can't seem to pinpoint the reason why; the syntax of the two tasks is exactly the same, and both lines are definitely in the /etc/sudoers file. I've included the relevant code below, any input would be highly appreciated.


# /etc/sudoers
# This file MUST be edited with the 'visudo' command as root.
# See the man page for details on how to write a sudoers file.

Defaults        env_reset

# Host alias specification

# User alias specification

# Cmnd alias specification
Cmnd_Alias      SHUTDOWN = /sbin/shutdown, /sbin/reboot, /sbin/halt, /usr/bin/passwd
Cmnd_Alias      PERMISSIONS = /bin/chmod, /bin/chown
# User privilege specification
root    ALL=(ALL) ALL
debian-transmission     ALL=(ALL) NOPASSWD: PERMISSIONS
Defaults env_keep += "RPI_UPDATE_UNSUPPORTED"
# Allow members of group sudo to execute any command
# (Note that later entries override this, so you might need to move
# it further down)
%sudo ALL=(ALL) ALL
#includedir /etc/sudoers.d

Relevant snippet from the playbook tasks:

- name: set pi permissions in /etc/sudoers                                      
  lineinfile: "dest=/etc/sudoers                                                
              line='pi      ALL=(ALL) NOPASSWD: ALL'                            
              validate='visudo -cf %s'"                                         

- name: set debian-transmission permissions in /etc/sudoers                     
  lineinfile: "dest=/etc/sudoers                                                
              line='debian-transmission     ALL=(ALL) NOPASSWD: PERMISSIONS'    
              validate='visudo -cf %s'"                                         

(I'm aware that the first task is unnecessary since that is the system default, but I added it while trying to figure out why the other task wasn't working just to prove a point.)

And here's the output when I run the playbook:

TASK: [kodi | start transmission-daemon again once settings.json has been copied] *** 
changed: [kodi]

TASK: [kodi | set pi permissions in /etc/sudoers] ***************************** 
ok: [kodi]

TASK: [kodi | set debian-transmission permissions in /etc/sudoers] ************ 
failed: [kodi] => {"cmd": "visudo -cf /tmp/tmpZNRBC3", "failed": true, "rc": 2}
msg: [Errno 2] No such file or directory

FATAL: all hosts have already failed -- aborting

by hobbyte at January 29, 2015 04:18 AM

How do I get cljx to compile clojure files into classes during uberjar?

I'm using cljx to build clj / cljs applications but haven't for the life of me been able to create a self-packaged, compiled jar.

Here's my project file ->

When I run lein jar or lein uberjar I get only a packaged jar with the clj files, not the class files. See the output file above ^^.

Does anyone know how to make sure the transpiled clj code gets compiled into class files?

by chris.wood at January 29, 2015 04:12 AM


ido-find-file contextual behavior

Edit: Solved

Hey all. This is driving me bonkers, has been for a looooooong time. =)

When I C-x C-f while the point is on a line that looks like a file path, the starting location seems to be relative to the path at point. I know this might be helpful in certain cases, but I'd like to either disable it completely or know if there's a way to restart the search relative to the directory of the current buffer. I wouldn't mind an optional find-file-relative-to-file-at-point command, but defaulting to this behavior doesn't fit my workflow.

This is especially annoying when editing shell scripts or html.

I read through the doc string of C-h f ido-find-file but couldn't find a way to customize or jump immediately back to the directory of the buffer where I entered the function from.

Any ideas? Thanks in advance!

submitted by jhirn
[link] [2 comments]

January 29, 2015 03:54 AM


Calculating the Difference Between Two Java Date Instances

I'm using Java's Date class in Scala and want to compare a date object with the current time. I know I can calculate the delta by using getTime():

(new java.util.Date()).getTime() - oldDate.getTime()

However, this just leaves me with a Long representing milliseconds. Is there any simpler, nicer way to get a time delta?
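
One lightweight option, using only the JDK: convert the millisecond delta into the unit you want with java.util.concurrent.TimeUnit. A sketch with a hypothetical delta value:

```scala
import java.util.concurrent.TimeUnit

// Hypothetical delta; in the question this would come from
// (new java.util.Date()).getTime() - oldDate.getTime()
val deltaMillis: Long = 125000L

val seconds = TimeUnit.MILLISECONDS.toSeconds(deltaMillis) // 125
val minutes = TimeUnit.MILLISECONDS.toMinutes(deltaMillis) // 2 (truncated)
```

Note that the conversions truncate, so for calendar-aware differences (days across DST boundaries, months) a date/time library is still the better tool.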

by pr1001 at January 29, 2015 03:43 AM

Convert a Scala list to a tuple?

How can I convert a list with (say) 3 elements into a tuple of size 3?

For example, let's say I have val x = List(1, 2, 3) and I want to convert this into (1, 2, 3). How can I do this?
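
There is no general List-to-tuple conversion in the standard library (each tuple arity is a distinct type), but for a known, fixed size a pattern match does it. A minimal sketch:

```scala
val x = List(1, 2, 3)

// Destructure a 3-element list into a Tuple3; any other length is an error.
val t: (Int, Int, Int) = x match {
  case List(a, b, c) => (a, b, c)
  case _             => sys.error("expected exactly 3 elements")
}
```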

by grautur at January 29, 2015 03:38 AM

How to Configure CIDER repl?

The markdown for CIDER about configuring the CIDER repl starts off by saying:

You can certainly use CIDER without configuring it any further, but here are some ways other folks are adjusting their CIDER experience.

And then it lists several expressions like (setq nrepl-log-messages t). But where do those expressions need to be written?

by dirtymikeandtheboys at January 29, 2015 03:31 AM

How to find Scala method

Using the Scala API: is there a way to find the object or class associated with a method?

So if I just know that there exists a method println, is there a quick way to navigate to the details of this method in the API documentation (other than searching the Scala source)?

by blue-sky at January 29, 2015 03:30 AM


Which interest rates to use for options pricing?

I am looking at the historical treasury interest rates and am uncertain which rates would be best to use for options pricing.

Should I use 1 month, 6 month, 2 year?


by Jakobovski at January 29, 2015 03:28 AM


Understanding `Monomorphism` Example of Shapeless

The Shapeless Features Overview shows the following example:

import poly._

// choose is a function from Sets to Options with no type specific cases
object choose extends (Set ~> Option) {
  def apply[T](s : Set[T]) = s.headOption
}

scala> choose(Set(1, 2, 3))
res0: Option[Int] = Some(1)

scala> choose(Set('a', 'b', 'c'))
res1: Option[Char] = Some(a)

However, in my lack of experience with Shapeless, I don't understand the difference between that and the following:

scala> def f[T](set: Set[T]): Option[T] = set.headOption
f: [T](set: Set[T])Option[T]

scala> f( Set(1,2,3) )
res0: Option[Int] = Some(1)

scala> f( Set('a', 'b', 'c') )
res1: Option[Char] = Some(a)
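
The difference shows up when you try to pass the function around: a polymorphic method like f cannot be turned into a polymorphic value (eta-expansion fixes T to one type), whereas choose is an ordinary value that stays polymorphic at every use site. A self-contained sketch without Shapeless (the ~> trait here is a hand-rolled stand-in for the real one):

```scala
// Hand-rolled natural-transformation trait, standing in for shapeless's ~>.
trait ~>[F[_], G[_]] { def apply[T](f: F[T]): G[T] }

val choose = new (Set ~> Option) { def apply[T](s: Set[T]) = s.headOption }

// choose can be passed as an argument and applied at several types inside;
// a plain method f[T] could not be passed this way without losing T.
def pairHeads(c: Set ~> Option)(ints: Set[Int], chars: Set[Char]) =
  (c(ints), c(chars))
```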

by Kevin Meredith at January 29, 2015 03:20 AM



Is there a way to expand the scope of an existential type quantifier in Scala to convince the type checker that two variables have the same type?

Consider the following code snippet:

case class Foo[A](a:A)
case class Bar[A](a:A)

def f[B](foo:Foo[Seq[B]], bar:Bar[Seq[B]]) = foo.a ++ bar.a

val s : Seq[T] forSome {type T} = Seq(1, 2, 3)

f(Foo(s), Bar(s))

The last line fails to type check, because Foo(s) has type Foo[Seq[T]] forSome {type T} and Bar(s) has type Bar[Seq[T]] forSome {type T}, i.e. each has its own existential quantifier.

Is there any way around this? In reality all I know about s at compile time is that it has such an existential type. How can I force Foo(s) and Bar(s) to fall under the scope of a single existential quantifier?

Does this make sense? I'm pretty new to Scala and fancy types in general.
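
One standard workaround, sketched here: open the existential exactly once by passing s through a polymorphic helper method, so that Foo(s) and Bar(s) are both built with the same skolemized T (Seq[_] is used below as shorthand for the forSome form in the question):

```scala
case class Foo[A](a: A)
case class Bar[A](a: A)

def f[B](foo: Foo[Seq[B]], bar: Bar[Seq[B]]) = foo.a ++ bar.a

// Inside g, s has the single fixed type parameter T, so Foo(s) and Bar(s)
// share it and the call to f type-checks.
def g[T](s: Seq[T]): Seq[T] = f(Foo(s), Bar(s))

val s: Seq[_] = Seq(1, 2, 3)
```

Calling g(s) now compiles: the compiler skolemizes the existential once at the call to g, which is exactly the "single quantifier scope" being asked for.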

by iansimon at January 29, 2015 03:18 AM



ansible: escaping \" string

I'm trying to add a line to a file using lineinfile that contains the string \".

let's say: hello \" world \"

The problem is that the string \" is itself the escape sequence for the " character.

any ideas?

EDIT: I tried \\\" and \\\\" with no luck

by user1692261 at January 29, 2015 03:00 AM

DragonFly BSD Digest

NFS and alc(4) improvements

If you have very recent alc(4) hardware, it may be supported now.  If you are booting over NFS, it may be faster now.  These changes are unrelated other than both being recent – NFS is improved for any chipset.

by Justin Sherrill at January 29, 2015 02:42 AM


What constitutes an "odd lot" in corporate bonds trades?

This is important in the price discovery and pricing of bonds based on trades. "Odd" lots trade at lower prices than "round" lots. However, I wasn't able to find a definition of an "odd" lot anywhere. For equity shares, odd lots are lots in multiples other than 100. I found one definition by Fidelity (an odd lot is a number of bonds other than 100) and one by Capital IQ (trades below $1 million).

by PBD10017 at January 29, 2015 02:26 AM


GNU or Melpa version of yasnippet both in M-x package-list-packages which one am i suppose to get

The gnu version is 0.8.0 and melpa is 20141223.303, everything else is the same. What is the difference? (I'm curious because I may run into this again)

submitted by workisnotfun
[link] [6 comments]

January 29, 2015 02:23 AM


A* graph search heuristic for pathfinding

A* needs a consistent heuristic to work on a graph.

So I'm not sure whether the straight-line ("bird flight") heuristic can be used.

For example: the cost of travelling to a neighboring node is always positive.


 |                  |
 |      START       |
 |                  |

where the two stripes are obstacles.

Am I correct that the proposed heuristic isn't consistent here, since the path has to travel away from the goal first?

Is there a good heuristic for this kind of situation? Or should I stick with Dijkstra and forget about A*?

by Milan at January 29, 2015 02:16 AM


Good book for programming language concepts

Can someone recommend a good book about programming language design concepts (not compiler design)? I'd like to read more on lazy, imperative, declarative, and functional programming and their differences, as well as lazy evaluation, duck typing, currying, lambda functions, etc.



by Tharanga Abeyseela at January 29, 2015 02:10 AM





Is there any command such as versions:display-dependency-updates of maven on Activator (PlayFramework)

Is there any command on Activator that can list all the dependencies and libraries I use, showing the current version and any newer versions available, so that I can update my project?

This is similar to the Apache Maven command versions:display-dependency-updates.

Is there any similar command available?

by endrigoantonini at January 29, 2015 01:52 AM

Ansible 1.8.2 nested loop, trying to output item[1] in item[0] string, but its being escaped

Here is my example. I'm not sure if this can be done, but I would like to output the value from the sites array, specifically item[1].site, in item[0].dest. However, it looks like {{ item[1].site }} is being escaped to {# item[1].site #}. Is there a way to prevent the string from being escaped?

- name: Put files into docker directory
  template: src={{ item[0].src }} dest={{ item[0].dest }}
  with_nested:
    - [
        { src: 'Dockerfile.j2', dest: "/opt/docker-apache2-fpm/{{ item[1].site }}/Dockerfile" },
      ]
    - sites

Here is the output:

failed: [] => (item=[{'dest': u'/opt/docker-apache2-fpm/{# item[1].site #}/Dockerfile', 'src': 'Dockerfile.j2'}, {'site': '', 'user': 'mysite', 'uid': 11004}]) => {"failed": true, "item": [{"dest": "/opt/docker-apache2-fpm/{# item[1].site #}/Dockerfile", "src": "Dockerfile.j2"}, {"site": "", "uid": 11004, "user": "mysite"}]}
msg: Destination directory  does not exist

by Trololololol at January 29, 2015 01:46 AM

Ansible to Conditionally Prompt for a Variable?

I would like to be able to prompt for my super secure password variable if it is not already in the environment variables. (I'm thinking that I might not want to put the definition into .bash_profile or one of the other spots.)

This is not working. It always prompts me.

vars:
  THISUSER: "{{ lookup('env','LOGNAME') }}"
  SSHPWD:   "{{ lookup('env','MY_PWD') }}"

vars_prompt:
  - name: "release_version"
    prompt: "Product release version"
    default: "1.0"
    when: SSHPWD == null

Thoughts? thank you! ps I'm on a mac, but I'd love for it to be platform-independent.

by AnneTheAgile at January 29, 2015 01:42 AM

Planet Theory

New Bounds on Optimal Sorting Networks

Authors: Thorsten Ehlers, Mike Müller
Download: PDF
Abstract: We present new parallel sorting networks for $17$ to $20$ inputs. For $17, 19,$ and $20$ inputs these new networks are faster (i.e., they require less computation steps) than the previously known best networks. Therefore, we improve upon the known upper bounds for minimal depth sorting networks on $17, 19,$ and $20$ channels. Furthermore, we show that our sorting network for $17$ inputs is optimal in the sense that no sorting network using less layers exists. This solves the main open problem of [D. Bundala & J. Závodný. Optimal sorting networks, Proc. LATA 2014].

January 29, 2015 01:41 AM

Kangaroo Methods for Solving the Interval Discrete Logarithm Problem

Authors: Alex Fowler, Steven Galbraith
Download: PDF
Abstract: The interval discrete logarithm problem is defined as follows: Given some $g,h$ in a group $G$, and some $N \in \mathbb{N}$ such that $g^z=h$ for some $z$ where $0 \leq z < N$, find $z$. At the moment, kangaroo methods are the best low memory algorithm to solve the interval discrete logarithm problem. The fastest non parallelised kangaroo methods to solve this problem are the three kangaroo method, and the four kangaroo method. These respectively have expected average running times of $\big(1.818+o(1)\big)\sqrt{N}$, and $\big(1.714 + o(1)\big)\sqrt{N}$ group operations. It is currently an open question as to whether it is possible to improve kangaroo methods by using more than four kangaroos. Before this dissertation, the fastest kangaroo method that used more than four kangaroos required at least $2\sqrt{N}$ group operations to solve the interval discrete logarithm problem. In this thesis, I improve the running time of methods that use more than four kangaroos significantly, and almost beat the fastest kangaroo algorithm, by presenting a seven kangaroo method with an expected average running time of $\big(1.7195 + o(1)\big)\sqrt{N} \pm O(1)$ group operations. The question, 'Are five kangaroos worse than three?' is also answered in this thesis, as I propose a five kangaroo algorithm that requires on average $\big(1.737+o(1)\big)\sqrt{N}$ group operations to solve the interval discrete logarithm problem.

January 29, 2015 01:41 AM

Planarity of Streamed Graphs

Authors: Giordano Da Lozzo, Ignaz Rutter
Download: PDF
Abstract: In this paper we introduce a notion of planarity for graphs that are presented in a streaming fashion. A $\textit{streamed graph}$ is a stream of edges $e_1,e_2,...,e_m$ on a vertex set $V$. A streamed graph is $\omega$-$\textit{stream planar}$ with respect to a positive integer window size $\omega$ if there exists a sequence of planar topological drawings $\Gamma_i$ of the graphs $G_i=(V,\{e_j \mid i\leq j < i+\omega\})$ such that the common graph $G^{i}_\cap=G_i\cap G_{i+1}$ is drawn the same in $\Gamma_i$ and in $\Gamma_{i+1}$, for $1\leq i < m-\omega$. The $\textit{Stream Planarity}$ Problem with window size $\omega$ asks whether a given streamed graph is $\omega$-stream planar. We also consider a generalization, where there is an additional $\textit{backbone graph}$ whose edges have to be present during each time step. These problems are related to several well-studied planarity problems.

We show that the $\textit{Stream Planarity}$ Problem is NP-complete even when the window size is a constant and that the variant with a backbone graph is NP-complete for all $\omega \ge 2$. On the positive side, we provide $O(n+\omega{}m)$-time algorithms for (i) the case $\omega = 1$ and (ii) all values of $\omega$ provided the backbone graph consists of one $2$-connected component plus isolated vertices and no stream edge connects two isolated vertices. Our results improve on the Hanani-Tutte-style $O((nm)^3)$-time algorithm proposed by Schaefer [GD'14] for $\omega=1$.

January 29, 2015 01:41 AM

The Logic of Counting Query Answers: A Study via Existential Positive Queries

Authors: Hubie Chen, Stefan Mengel
Download: PDF
Abstract: We consider the computational complexity of counting the number of answers to a logical formula on a finite structure. In the setting of parameterized complexity, we present a trichotomy theorem on classes of existential positive queries. We then proceed to study an extension of first-order logic in which algorithms for the counting problem at hand can be naturally and conveniently expressed.

January 29, 2015 01:41 AM

Bernays-Schönfinkel-Ramsey with Simple Bounds is NEXPTIME-complete

Authors: Marco Voigt, Christoph Weidenbach
Download: PDF
Abstract: Linear arithmetic extended with free predicate symbols is undecidable, in general. We show that the restriction of linear arithmetic inequations to simple bounds extended with the Bernays-Schönfinkel-Ramsey free first-order fragment is decidable and NEXPTIME-complete. The result is almost tight because the Bernays-Schönfinkel-Ramsey fragment is undecidable in combination with linear difference inequations, simple additive inequations, quotient inequations and multiplicative inequations.

January 29, 2015 01:40 AM

A Lower Bound on the Average-Case Complexity of Shellsort. (arXiv:cs/9906008v2 [cs.CC] UPDATED)

We prove a general lower bound on the average-case complexity of Shellsort: the average number of data-movements (and comparisons) made by a $p$-pass Shellsort for any incremental sequence is $\Omega (pn^{1 + 1/p})$ for every $p$. The proof method is an incompressibility argument based on Kolmogorov complexity. Using similar techniques, the average-case complexity of several other sorting algorithms is analyzed.

by Tao Jiang (McMaster U.), Ming Li (U. Waterloo), Paul Vitanyi (CWI & U. Amsterdam) at January 29, 2015 01:30 AM

Monadic Second-Order Logic and Bisimulation Invariance for Coalgebras. (arXiv:1501.07215v1 [cs.LO])

Generalizing standard monadic second-order logic for Kripke models, we introduce monadic second-order logic interpreted over coalgebras for an arbitrary set functor. Similar to well-known results for monadic second-order logic over trees, we provide a translation of this logic into a class of automata, relative to the class of coalgebras that admit a tree-like supporting Kripke frame. We then consider invariance under behavioral equivalence of formulas; more in particular, we investigate whether the coalgebraic mu-calculus is the bisimulation-invariant fragment of monadic second-order logic. Building on recent results by the third author we show that in order to provide such a coalgebraic generalization of the Janin-Walukiewicz Theorem, it suffices to find what we call an adequate uniform construction for the functor. As applications of this result we obtain a partly new proof of the Janin-Walukiewicz Theorem, and bisimulation invariance results for the bag functor (graded modal logic) and all exponential polynomial functors.

Finally, we consider in some detail the monotone neighborhood functor, which provides coalgebraic semantics for monotone modal logic. It turns out that there is no adequate uniform construction for this functor, whence the automata-theoretic approach towards bisimulation invariance does not apply directly. This problem can be overcome if we consider global bisimulations between neighborhood models: one of our main technical results provides a characterization of the monotone modal mu-calculus extended with the global modalities, as the fragment of monadic second-order logic for the monotone neighborhood functor that is invariant for global bisimulations.

by Sebastian Enqvist, Fatemeh Seifan, Yde Venema at January 29, 2015 01:30 AM

Everyday the Same Picture: Popularity and Content Diversity. (arXiv:1501.07201v1 [cs.SI])

Facebook is flooded by diverse and heterogeneous content, from kittens up to music and news, passing through satirical and funny stories. Each piece of that corpus reflects the heterogeneity of the underlying social background. In the Italian Facebook we have found an interesting case: a page having more than $40K$ followers that every day posts the same picture of Toto Cutugno, a popular Italian singer. In this work, we use such a page as a benchmark to study and model the effects of content heterogeneity on popularity. In particular, we use that page for a comparative analysis of information consumption patterns with respect to pages posting science and conspiracy news. In total, we analyze about $2M$ likes and $190K$ comments, made by approximately $340K$ and $65K$ users, respectively. We conclude the paper by introducing a model mimicking users selection preferences accounting for the heterogeneity of contents.

by Alessandro Bessi, Fabiana Zollo, Michela Del Vicario, Antonio Scala, Guido Caldarelli, Fabio Petroni, Bruno Gonçalves, Walter Quattrociocchi at January 29, 2015 01:30 AM

One Size Does not Fit All: When to Use Signature-based Pruning to Improve Template Matching for RDF graphs. (arXiv:1501.07184v1 [cs.DB])

Signature-based pruning is broadly accepted as an effective way to improve query performance of graph template matching on general labeled graphs. Most existing techniques which utilize signature-based pruning claim its benefits on all datasets and queries. However, the effectiveness of signature-based pruning varies greatly among different RDF datasets and highly related with their dataset characteristics. We observe that the performance benefits from signature-based pruning depend not only on the size of the RDF graphs, but also the underlying graph structure and the complexity of queries. This motivates us to propose a flexible RDF querying framework, called RDF-h, which selectively utilizes signature-based pruning by evaluating the characteristics of RDF datasets and query templates. Scalability and efficiency of RDF-h is demonstrated in experimental results using both real and synthetic datasets.

Keywords: RDF, Graph Template Matching, Signature-based Pruning

by Shi Qiao, Z. Meral Ozsoyoglu at January 29, 2015 01:30 AM

A Data Annotation Architecture for Semantic Applications in Virtualized Wireless Sensor Networks. (arXiv:1501.07139v1 [cs.NI])

Wireless Sensor Networks (WSNs) have become very popular and are being used in many application domains (e.g. smart cities, security, gaming and agriculture). Virtualized WSNs allow the same WSN to be shared by multiple applications. Semantic applications are situation-aware and can potentially play a critical role in virtualized WSNs. However, provisioning them in such settings remains a challenge. The key reason is that semantic applications provisioning mandates data annotation. Unfortunately it is no easy task to annotate data collected in virtualized WSNs. This paper proposes a data annotation architecture for semantic applications in virtualized heterogeneous WSNs. The architecture uses overlays as the cornerstone, and we have built a prototype in the cloud environment using Google App Engine. The early performance measurements are also presented.

by Imran Khan, Rifat Jafrin, Fatima Zahra Errounda, Roch Glitho, Noel Crespi, Monique Morrow, Paul Polako at January 29, 2015 01:30 AM

Wireless Sensor Network Virtualization: Early Architecture and Research Perspectives. (arXiv:1501.07135v1 [cs.NI])

Wireless sensor networks (WSNs) have become pervasive and are used in many applications and services. Usually the deployments of WSNs are task oriented and domain specific; thereby precluding re-use when other applications and services are contemplated. This inevitably leads to the proliferation of redundant WSN deployments. Virtualization is a technology that can aid in tackling this issue, as it enables the sharing of resources/infrastructure by multiple independent entities. In this paper we critically review the state of the art and propose a novel architecture for WSN virtualization. The proposed architecture has four layers (physical layer, virtual sensor layer, virtual sensor access layer and overlay layer) and relies on the constrained application protocol (CoAP). We illustrate its potential by using it in a scenario where a single WSN is shared by multiple applications; one of which is a fire monitoring application. We present the proof-of-concept prototype we have built along with the performance measurements, and discuss future research directions.

by Imran Khan, Fatna Belqasmi, Roch Glitho, Noel Crespi, Monique Morrow, Paul Polakos at January 29, 2015 01:30 AM

A Characterisation of Context-Sensitive Languages by Consensus Games. (arXiv:1501.07131v1 [cs.FL])

We propose a game for recognising formal languages, in which two players with imperfect information need to coordinate on a common decision, given private input information. The players have a joint objective to avoid an inadmissible decision, in spite of the uncertainty induced by the input.

We show that this model of consensus acceptor games characterises context-sensitive languages, and conversely, that winning strategies in such games can be described by context-sensitive languages. This implies that it is undecidable whether a consensus game admits a winning strategy, and, even if so, it is PSPACE-hard to execute one. On the positive side, we prove that whenever a winning strategy exists, there exists one that can be implemented by a linear bounded automaton.

by Dietmar Berwanger, Marie van den Bogaard at January 29, 2015 01:30 AM

k2U: A General Framework from k-Point Effective Schedulability Analysis to Utilization-Based Tests. (arXiv:1501.07084v1 [cs.OS])

To deal with a large variety of workloads in different application domains in real-time embedded systems, a number of expressive task models have been developed. For each individual task model, researchers tend to develop different types of techniques for schedulability tests with different computation complexity and performance. In this paper, we present a general schedulability analysis framework, namely the k2U framework, that can be potentially applied to analyze a large set of real-time task models under any fixed-priority scheduling algorithm, on both uniprocessors and multiprocessors. The key to k2U is a k-point effective schedulability test, which can be viewed as a blackbox interface to apply the k2U framework. For any task model, if a corresponding k-point effective schedulability test can be constructed, then a sufficient utilization-based test can be automatically derived. We show the generality of k2U by applying it to different task models, which results in new and better tests compared to the state-of-the-art.

by Jian-Jia Chen, Wen-Hung Huang, Cong Liu at January 29, 2015 01:30 AM

A Diagrammatic Axiomatisation for Qubit Entanglement. (arXiv:1501.07082v1 [cs.LO])

Diagrammatic techniques for reasoning about monoidal categories provide an intuitive understanding of the symmetries and connections of interacting computational processes. In the context of categorical quantum mechanics, Coecke and Kissinger suggested that two 3-qubit states, GHZ and W, may be used as the building blocks of a new graphical calculus, aimed at a diagrammatic classification of multipartite qubit entanglement that would highlight the communicational properties of quantum states, and their potential uses in cryptographic schemes.

In this paper, we present a full graphical axiomatisation of the relations between GHZ and W: the ZW calculus. This refines a version of the preexisting ZX calculus, while keeping its most desirable characteristics: undirectedness, a large degree of symmetry, and an algebraic underpinning. We prove that the ZW calculus is complete for the category of free abelian groups on a power of two generators - "qubits with integer coefficients" - and provide an explicit normalisation procedure.

by <a href="">Amar Hadzihasanovic</a> at January 29, 2015 01:30 AM

On the genetic optimization of APSK constellations for satellite broadcasting. (arXiv:1501.07080v1 [cs.IT])

Both satellite transmissions and DVB applications over satellite present peculiar characteristics that could be taken into account in order to further exploit the optimality of the transmission. In this paper, starting from the state of the art, the optimization of the APSK constellation through asymmetric symbol arrangement is investigated for use in satellite communications. In particular, the optimization problem is tackled by means of Genetic Algorithms, which have already been demonstrated to work well on complex non-linear optimization problems like the one presented hereinafter. This work studies the various parameters involved in the optimization routine in order to establish those that best fit this case, thus further enhancing the constellation.

by <a href="">Alessio Meloni</a>, <a href="">Maurizio Murroni</a> at January 29, 2015 01:30 AM

The Affinity Effects of Parallelized Libraries in Concurrent Environments. (arXiv:1501.07079v1 [cs.DC])

The use of cloud computing grows as it appears to be an additional resource for High-Performance Parallel and Distributed Computing (HPDC), especially with respect to its use in support of scientific applications. Many studies have been devoted to determining the effect of the virtualization layer on performance, but most of the studies conducted so far lack insight into the joint effects of application type, virtualization layer, and the parallelized libraries used by the applications. This work introduces the concept of affinity with regard to the combined effects of the virtualization layer, the class of application, and the parallelized libraries used in these applications. Affinity is defined here as the degree of influence that one application has on other applications when they run concurrently in virtual environments hosted on the same physical server. The results presented here show that the parallel libraries used in an application's implementation have a significant influence, and that combinations of these library types and application classes can significantly affect the performance of the environment. In this context, the concept of affinity is then used to evaluate these impacts, contributing to better stability and performance in the computational environment.

by <a href="">Fabio Licht</a>, <a href="">Bruno Schulze</a>, <a href="">Luis E. Bona</a>, <a href="">Antonio R. Mury</a> at January 29, 2015 01:30 AM

On the stability of asynchronous Random Access Schemes. (arXiv:1501.07072v1 [cs.IT])

Slotted Aloha-based Random Access (RA) techniques have recently regained attention in light of the use of Interference Cancellation (IC) as a means of exploiting the diversity created by transmitting multiple burst copies per packet (CRDSA). Subsequently, the same concept has been extended to pure ALOHA-based techniques in order to boost performance in asynchronous RA schemes as well. In this paper, throughput, packet delay, and the related stability of asynchronous ALOHA techniques under geometrically distributed retransmissions are analyzed for both finite and infinite population sizes. Moreover, a comparison between pure ALOHA, its evolution (known as CRA), and CRDSA techniques is presented, in order to quantify the gain achievable in a closed-loop scenario with respect to the previous state of the art.

by <a href="">Alessio Meloni</a>, <a href="">Maurizio Murroni</a> at January 29, 2015 01:30 AM
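For context, the baseline against which CRA and CRDSA are measured is the textbook ALOHA model. A minimal sketch of the classical throughput formulas under Poisson offered load G follows; this is the standard model, not the paper's IC-enhanced analysis:

```python
import math

def slotted_aloha_throughput(G):
    # Classical slotted Aloha: a slot succeeds iff exactly one
    # transmission falls in it, giving throughput S = G * e^{-G}.
    return G * math.exp(-G)

def pure_aloha_throughput(G):
    # Pure (unslotted) Aloha: the vulnerable period spans two
    # packet durations, giving S = G * e^{-2G}.
    return G * math.exp(-2.0 * G)

# Peaks: slotted Aloha reaches 1/e (about 0.368) at G = 1;
# pure Aloha reaches 1/(2e) (about 0.184) at G = 0.5.
```

Techniques such as CRDSA use burst repetition and interference cancellation precisely to push throughput well beyond these classical peaks.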

An Effective Framework for Managing University Data using a Cloud based Environment. (arXiv:1501.07056v1 [cs.DC])

Management of data in the education sector, particularly for big universities with several employees, departments, and students, is a very challenging task. There are also problems such as a lack of proper funds and manpower for managing such data in universities. The education sector can easily and effectively take advantage of cloud computing for data management. It can enhance the learning experience as a whole and add entirely new dimensions to the way education is imbibed. Several benefits of cloud computing, such as monetary savings, environmental benefits, and remote data access, can be exploited for managing data such as a university database. Therefore, in this paper we propose an effective framework for managing university data using a cloud-based environment. We also propose a cloud data management simulator: a new simulation framework which demonstrates the applicability of the cloud in the current education sector. The framework consists of a cloud developed for processing a university database of staff and students. It has the following features: (i) support for modeling cloud computing infrastructure, including data centers containing the university database; (ii) a user-friendly interface; (iii) flexibility to switch between the different types of users; and (iv) virtualized access to cloud data.

by <a href="">Kashish Ara Shakil</a>, <a href="">Shuchi Sethi</a>, <a href="">Mansaf Alam</a> at January 29, 2015 01:30 AM

Resource Usage Estimation of Data Stream Processing Workloads in Datacenter Clouds. (arXiv:1501.07020v1 [cs.DB])

Real-time computation of data streams over affordable virtualized infrastructure resources is an important form of data-in-motion processing architecture. However, processing such data streams while ensuring strict guarantees on quality of service is problematic due to: (i) uncertain stream arrival patterns; (ii) the need to process different types of continuous queries; and (iii) the variable resource consumption behavior of continuous queries. Recent work has explored the use of statistical techniques for resource estimation of SQL queries and OLTP workloads. All these techniques approximate the resource usage of each query as a single point value. However, in data stream processing workloads, in which data flows through the graph of operators endlessly and exhibits fluctuations in performance and resource demand, single point resource estimation is inadequate: it is neither expressive enough nor does it capture the multi-modal nature of the target data. To this end, we present a novel technique which uses mixture density networks, a combined structure of neural networks and mixture models, to estimate the whole spectrum of resource usage as probability density functions. The proposed approach is a flexible and convenient means of modeling unknown distributions. We have validated the models using both the Linear Road benchmark and TPC-H, observing high accuracy under a number of error metrics: mean-square error, continuous ranked probability score, and negative log predictive density.

by <a href="">Alireza Khoshkbarforoushha</a>, <a href="">Rajiv Ranjan</a>, <a href="">Raj Gaire</a>, <a href="">Prem P. Jayaraman</a>, <a href="">John Hosking</a>, <a href="">Ehsan Abbasnejad</a> at January 29, 2015 01:30 AM
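A mixture density network is a neural network whose outputs parameterize a mixture distribution (mixing weights, means, and variances) rather than a single point estimate. As a minimal illustration of the output side only, evaluating a 1-D Gaussian mixture density given such parameters; the network itself, and the paper's actual models, are omitted:

```python
import math

def gaussian_mixture_pdf(x, weights, means, sigmas):
    """Evaluate a 1-D Gaussian mixture density at x. In a mixture
    density network these parameters would be emitted by the
    network's output layer for a given input; here they are fixed."""
    total = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        norm = 1.0 / (s * math.sqrt(2.0 * math.pi))
        total += w * norm * math.exp(-0.5 * ((x - mu) / s) ** 2)
    return total

# A bimodal resource-usage estimate: two equally weighted modes at
# -1 and +1, a shape no single point estimate could represent.
pdf = gaussian_mixture_pdf(0.0, [0.5, 0.5], [-1.0, 1.0], [0.5, 0.5])
```

This multi-modality is exactly why the abstract argues that density estimates are more expressive than single-point resource predictions.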

Heterogeneous Cellular Networks Using Wireless Backhaul: Fast Admission Control and Large System Analysis. (arXiv:1501.06988v1 [cs.IT])

We consider a heterogeneous cellular network with densely underlaid small cell access points (SAPs). Wireless backhaul provides the data connection from the core network to the SAPs. To serve as many SAPs as possible with guaranteed data rates, admission control of SAPs needs to be performed in the wireless backhaul. Such a problem involves the joint design of transmit beamformers, power control, and selection of SAPs. In order to tackle such a difficult problem, we apply $\ell_1$-relaxation and propose an iterative algorithm for the $\ell_1$-relaxed problem. The selection of SAPs is made based on the outputs of the iterative algorithm. This algorithm is fast and enjoys low complexity for small-to-medium-sized systems. However, its solution depends on the actual channel state information, and rerunning the algorithm for each new channel realization may be unrealistic for large systems. Therefore, we make use of random matrix theory and also propose an iterative algorithm for large systems. This large system iterative algorithm produces an asymptotically optimum solution for the $\ell_1$-relaxed problem, which requires only large-scale channel coefficients, irrespective of the actual channel realization. Near-optimum results are achieved by our proposed algorithms in simulations.

by <a href="">Jian Zhao</a>, <a href="">Tony Q. S. Quek</a>, <a href="">Zhongding Lei</a> at January 29, 2015 01:30 AM

Learning Analytics: A Survey. (arXiv:1501.06964v1 [cs.DB])

Learning analytics is a research topic that has gained increasing popularity in recent years. It analyzes available learning data in order to create awareness of, or to improve, the learning process itself and/or its outcomes, such as student performance. In this survey paper, we look at recent research work conducted around learning analytics, frameworks and integrated models, and the application of various models and data mining techniques to identify at-risk students and to predict student performance.

by <a href="">Usha Keshavamurthy</a>, <a href="">H. S. Guruprasad</a> at January 29, 2015 01:30 AM

Stability Analysis of Slotted Aloha with Opportunistic RF Energy Harvesting. (arXiv:1501.06954v1 [cs.NI])

Energy harvesting (EH) is a promising technology for realizing energy-efficient wireless networks. In this paper, we utilize ambient RF energy, particularly interference from neighboring transmissions, to replenish the batteries of EH-enabled nodes. However, RF energy harvesting poses new challenges for the design of wireless networks. In this work, we investigate the performance of a slotted Aloha random access wireless network consisting of two types of nodes, namely type I, which has an unlimited energy supply, and type II, which is solely powered by an RF energy harvesting circuit. The transmissions of a type I node are recycled by a type II node to replenish its battery. Our contribution in this paper is multi-fold. First, we generalize the stochastic dominance technique for analyzing RF EH-networks. Second, we characterize an outer bound on the stable throughput region of RF EH-networks under the half-duplex and full-duplex energy harvesting paradigms. Third, we investigate the impact of finite-capacity batteries on the stable throughput region. Finally, we derive the closure of the outer bound over all transmission probability vectors.

by <a href="">Abdelrahman M. Ibrahim</a>, <a href="">Ozgur Ercetin</a>, <a href="">Tamer ElBatt</a> at January 29, 2015 01:30 AM



:repl-options do not take effect when used with :main in project.clj

Below is my partial project.clj file:

  :aot [almonds.runner]
  :main almonds.runner
  :profiles {:dev
             {:dependencies [[org.clojure/tools.namespace "0.2.4"]]
              :repl-options {:init-ns user
                             :init (refresh)}
              :source-paths ["dev"]}}

I am using CIDER with Emacs. When I run cider-jack-in, the REPL starts in the almonds.main ns instead of the user ns. How do I make it start in the user ns and also run the refresh fn?

by murtaza52 at January 29, 2015 01:02 AM