Planet Primates

September 18, 2014

StackOverflow

How to search and replace in a Clojure script data structure?

I would like to search and replace only the values inside a data structure:

(def str [1 2 3 
              {:a 1 
               :b 2 
               1  3}])

and

(subst  str  1  2) 

to return

[2 2 3 {:a 2, :b 2, 1 3}]

by Zubair at September 18, 2014 02:58 PM

StackOverflow

Is an intermediate List eligible for Garbage Collection when toStream is used?

Let's say I want to create a List[(Int, Int)]:

scala> (0 to 3).toList.zip(0 to 3)
res3: List[(Int, Int)] = List((0,0), (1,1), (2,2), (3,3))

However, what if I wanted to create a Stream[(Int, Int)] instead:

scala> (0 to 3).toList.zip(0 to 3).toStream
res4: scala.collection.immutable.Stream[(Int, Int)] = Stream((0,0), ?)

Is the intermediate list that was used to build res4 eligible for garbage collection?
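As a side note, one way to sidestep the concern entirely is to never build the intermediate List; a minimal sketch, assuming a 2.11-era Stream:

// Build the Stream directly from the Range, so no intermediate List exists.
// Stream.zip is lazy beyond the head, producing pairs on demand.
val s: Stream[(Int, Int)] = (0 to 3).toStream.zip(0 to 3)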

by Kevin Meredith at September 18, 2014 02:53 PM

Set default values in sparse nested map in clojure

I've got a set of default values for a map, and I'd like to be able to take any stored map that doesn't have the values and apply the defaults.

i.e. if I've got the following inputs

(def defaults {:config {:tablet {:urls [] :enable false}}})
(def stored   {:config {:tablet {         :enable true }}})

I'd like to be able to create the following result.

              {:config {:tablet {:urls [] :enable true}}}

So the stored values are used when they exist, but the defaults are used when a key is missing. I've tried merge, merge-with merge, merge-with concat, merge-with conj, and a few other incantations, but none quite works. One that does work, if you know the maximum nesting depth, is (merge-with (partial merge-with ... (partial merge-with merge) ... )), but that's pretty hacky. It seems like there should be a simpler solution, since this would be not-uncommon in Clojuresque code.
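For what it's worth, the usual fix is a depth-recursive merge rather than nested merge-with calls. Sketched here in Scala for concreteness (in Clojure the same recursion is typically spelled with merge-with on map-valued entries):

// Recursive deep merge: stored values win, except that two nested maps
// are merged recursively so missing keys fall back to the defaults.
def deepMerge(defaults: Map[String, Any], stored: Map[String, Any]): Map[String, Any] =
  stored.foldLeft(defaults) {
    case (acc, (k, v: Map[String, Any] @unchecked)) =>
      acc.get(k) match {
        case Some(d: Map[String, Any] @unchecked) => acc.updated(k, deepMerge(d, v))
        case _                                    => acc.updated(k, v)
      }
    case (acc, (k, v)) => acc.updated(k, v)
  }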

by Dax Fohl at September 18, 2014 02:53 PM

CompsciOverflow

Finding an exactly weighted st-path in a digraph

I have a weighted digraph $G = (V,E)$ where the weights are positive and negative integers. The graph $G$ is not necessarily acyclic.

The question is: given 2 nodes $v_1$ and $v_2$, is there a path from $v_1$ to $v_2$ with weight $w$?

I would like to know if there are any known complexity results for this problem, or even anything related but with a specific weight, not just shortest path, longest path, etc. I have been thinking of representing the problem as inequalities in some type of linear program, but before I start I'd like to get as much info as possible :)

by Tyler Durden at September 18, 2014 02:43 PM

StackOverflow

What if a remote environment variable doesn't exist? Ansible takes this as a fatal error. How can I avoid that?

I want to do some check using remote environment variables, which can be read from format like this

{{ ansible_env.NGINX_HOME }}

This "path" or environment variable can be absent, and that's the purpose of that check anyway.

But Ansible treats this as a fatal error, showing an error message like

One or more undefined variables: 'dict object' has no attribute 'NGINX_HOME'

What can I do to just skip this?

by Zhenkai at September 18, 2014 02:36 PM

Scala inference on _

I'm a newbie to Scala (functional programming). Though I have used _ in context in a few places like the one below,

list.map(_ * 1) 

I couldn't completely understand this statement

val story = (catch _) andThen (eat _)

though I can infer from the calling

story(new Cat, new Bird)

that the underscore serves as a placeholder for the argument positions, I want to understand the concept behind it.
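For a concrete picture: _ after a method name asks the compiler to eta-expand the method into a function value, which andThen can then compose. A small self-contained sketch, with stand-in names (catch is a reserved word in Scala and the original methods aren't shown):

class Cat; class Bird

def capture(c: Cat): Bird = new Bird   // stand-in for the tutorial's `catch`
def eat(b: Bird): Unit = println("bird eaten")

// `capture _` turns the method into a Cat => Bird function value;
// andThen pipes its result into `eat _` (Bird => Unit).
val story: Cat => Unit = (capture _) andThen (eat _)
story(new Cat)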

by Somasundaram Sekar at September 18, 2014 02:36 PM

QuantOverflow

Does the correlation of matrices have explanatory power when building a pattern recognition model?

I'm using 8 different variables (with daily observations) with the purpose of comparing different months across the historical data. For that purpose I calculate the correlation between each month and the historical months in the data, and then calculate the Euclidean distance in order to find the closest month.

Does it make sense? Is there any literature regarding such experiments?

by goncalogc at September 18, 2014 02:31 PM

Dave Winer

TV programming idea

About six million people will receive new iPhones tomorrow.

What will we all be doing, on Friday night?

Unboxing of course! And swapping out the SIM from our "old" phone into our new phone. And hopefully having it work.

Backing up the old phone, and restoring it into the new phone.

It seems that there is an opportunity for one of the news channels to have some kind of news show with the experiences the six million people, all around the world, will be sharing at the same time.

September 18, 2014 02:31 PM

QuantOverflow

What is a typical way forex brokerages can provide cheap leverage for their customers?

I'm not very well read in the area of high finance, but I'm curious how forex brokerages are able to back the leverage they offer to customers.

Is it possible to do this without charging interest, only making the return on the spread against the rates they can get?

Are there standard algorithms that can be used to this end?

by barrymac at September 18, 2014 02:27 PM

StackOverflow

Apache Spark - How to zip multiple RDDs

Let's say I have a bunch of RDDs, maybe RDD[Int], and I have a function that defines an operation on a sequence of ints and returns an int, kind of like a fold: f: Seq[Int] => Int. Let's say that it just sums the sequence.

If I have a sequence of RDDs, Seq[RDD[Int]], how do I apply the function and return a single new RDD with the resulting value? I don't seem to find a zipPartitions method in Spark which accomplishes this.

Cheers,

Johan
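A sketch of one possible approach (not a built-in API): fold the RDDs together with pairwise zip, accumulating a Seq[Int] per position, then map f over it. This assumes every RDD has the same partitioning and element count, which RDD.zip requires:

import org.apache.spark.rdd.RDD

// Combine a non-empty Seq of aligned RDDs element-wise with f.
def applyAcross(rdds: Seq[RDD[Int]], f: Seq[Int] => Int): RDD[Int] = {
  val zipped: RDD[Seq[Int]] =
    rdds.tail.foldLeft(rdds.head.map(Seq(_))) { (acc, next) =>
      acc.zip(next).map { case (seq, x) => seq :+ x }
    }
  zipped.map(f)
}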

by Johan S at September 18, 2014 02:24 PM

CompsciOverflow

Iterative permutations, but favoring swaps of early elements

I'm familiar with the Steinhaus–Johnson–Trotter algorithm, which allows for the iterative yielding of permutations of a sequence by performing a single swap per iteration. It has the behavior, however, that it tends to swap elements further along the sequence early in the iteration. For 4 elements, my understanding is that the first 6 permutations will be:

$1,2,3,4$

$1,2,4,3$

$1,4,2,3$

$4,1,2,3$

$4,1,3,2$

$4,3,1,2$

What I'd like to know is whether there is a similar iterative algorithm, using relatively simple changes (though obviously they can't be single swaps), that yields the sequence:

$1,2,3,4$

$2,1,3,4$

$1,3,2,4$

$2,3,1,4$

$3,1,2,4$

$3,2,1,4$


Per request, here is my attempt at a better definition of the desired result:

  1. Start with the initial sequence of $n$ elements and yield it.
  2. For each $i \in \{2..n\}$, then $k \in \{1..i\}$
  3. Take the element $x_i$ and place it at position $i-k$. We will now consider the first $i$ elements and keep the remaining $n-i$ elements the same.
  4. Yield all permutations of the remaining $i-1$ elements (using these rules) surrounding the position $i-k$.

Basically the goal is to permute each initial sequence fully before continuing to permute with later elements.

by Dan Bryant at September 18, 2014 02:24 PM

TheoryOverflow

Does 4NF imply BCNF?

Wiki says:

4NF is the next level of normalization after Boyce–Codd normal form,

but I cannot see how the definition of 4NF implies BCNF.

Consider the relation with 3 attributes: Country, District and City. District determines Country and a pair (Country, City) also determines the Country, but a City itself does not determine anything. It is 3NF as all attributes belong to some key, there are no Cartesian products, so it looks like 4NF. But it is not BCNF, because Country is determined by District, which is not a superkey.

Maybe 4NF is BCNF by assumption? But my lecture notes claim that 4NF is 3NF + "nontrivial multivalued dependencies always come from superkeys".

P.S. If not appropriate here, please move to SO or somewhere.

by savick01 at September 18, 2014 02:12 PM

StackOverflow

Why is it not possible (in Scala) to provide an implementation for an abstract override method in the implementing base class?

What I would like to do is this:

trait Addable[T]{
  def plus(x: T): T
}

trait AddableWithBounds[T] extends Addable[T] {
  abstract override def plus(x: T): T = limitToBounds(super.plus(x))

  def limitToBounds(x: T): T = ... //Some bounds checking
}

class Impl(num: Int) extends AddableWithBounds[Impl] {
  override def plus(x: Impl) = new Impl(num + x.num)
}

Reading through various posts, it would seem that the reason this is not possible is that after class linearization the stacked trait only looks in classes to its right for implementations of super.plus(x: T).

That is a technical argument, though, and I'm interested in the question: is there a fundamental reason why the implementation could not be taken from the base class? As you can see in this example, it is not possible to implement the plus method before knowing the actual data type that needs to be added, but as far as I can tell, implementing bounds checking in a trait seems reasonable.

Just as an idea: If the problem is that in this case it is unclear which plus is being overridden in the base class maybe a scope identifier like

def Addable[Impl].plus(x: Impl) = new Impl(num + x.num)

would help.

by Cornelius at September 18, 2014 02:04 PM

StackOverflow

How do I iterate RDD's in apache spark (scala)

I use the following command to fill an RDD with a bunch of arrays containing 2 strings ["filename", "content"].

Now I want to iterate over each of those occurrences to do something with each filename and content.

val someRDD = sc.wholeTextFiles("hdfs://localhost:8020/user/cloudera/*")

I can't seem to find any documentation on how to do this however.

So what I want is this:

foreach occurrence-in-the-rdd{
   //do stuff with the array found on location n of the RDD
}
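For reference, a sketch of the two usual ways to get at the elements; which one is right depends on the data size:

// Distributed: the closure runs on the executors, not the driver.
someRDD.foreach { case (filename, content) =>
  println(s"$filename: ${content.length} chars")
}

// Local: collect() ships everything to the driver first -- only safe
// when the data comfortably fits in driver memory.
someRDD.collect().foreach { case (filename, content) =>
  println(s"$filename: ${content.length} chars")
}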

by Havnar at September 18, 2014 02:00 PM

CompsciOverflow

Count points enclosed by several planes in 3D space

I have, for example, 10 planes, each given by an equation Ax + By + Cz = D, and a list of 3D points. Those planes can form regions, some of them closed and others not; the task is to count the number of points in each region.

I just need some ideas about how to solve this, or whether there is an algorithm for it.

by Danimar Ribeiro at September 18, 2014 01:59 PM

StackOverflow

ZMQDevice in PHP

I'm trying to use an IPC socket as the $listener with ZMQDevice

    ZMQDevice::__construct ( ZMQSocket $frontend, ZMQSocket $backend, ZMQSocket $listener)

with

    $listener = new ZMQSocket($context, ZMQ::SOCKET_PUB);
    $listener->bind("ipc://witness.ipc");

and

    $receiver = new ZMQSocket($context, ZMQ::SOCKET_SUB);
    $receiver->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, '');
    $receiver->connect("ipc://witness.ipc");

I receive nothing when using the IPC socket; if I switch to TCP it works.

Any idea?

Thank you.

by link2caro at September 18, 2014 01:59 PM

OCaml: Insert integer into sorted list of integers

What is an efficient way in OCaml to write a function insert : int -> list -> list that will insert an int into a sorted list of int and return the new list?

This is what I have so far to append to the end of a list:

let rec insert x list =
  match list with
  | [] -> [x]
  | h :: t -> h :: insert x t

by user3413252 at September 18, 2014 01:57 PM

CompsciOverflow

How to prove correctness of a shuffle algorithm?

I have two ways of producing a list of items in a random order and would like to determine if they are equally fair (unbiased).

The first method I use is to construct the entire list of elements and then do a shuffle on it (say a Fisher-Yates shuffle). The second method is more of an iterative method which keeps the list shuffled at every insertion. In pseudo-code the insertion function is:

insert( list, item )
    list.append( item )
    swap( list.random_item, list.last_item )

I'm interested in how one goes about showing the fairness of this particular shuffling. The advantages of this algorithm, where it is used, are enough that even if slightly unfair it'd be okay. To decide I need a way to evaluate its fairness.

My first idea is that I need to calculate the total permutations possible this way versus the total permutations possible for a set of the final length. I'm a bit at a loss however on how to calculate the permutations resulting from this algorithm. I also can't be certain this is the best, or easiest approach.
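One standard line of attack, sketched: at the $k$-th insertion the swap target is chosen uniformly among $k$ positions, so a full run of the algorithm makes

$$1 \cdot 2 \cdots n = n!$$

equally likely sequences of random choices. If one then shows that distinct choice sequences always yield distinct final orderings, the map from choice sequences to permutations is a bijection onto all $n!$ permutations, so each permutation has probability exactly $1/n!$.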

by edA-qa mort-ora-y at September 18, 2014 01:55 PM

StackOverflow

How to combine multiple PNGs into one big PNG file?

I have approx. 6000 PNG files (256*256 pixels) and want to combine them into a big PNG holding all of them programmatically.

What's the best/fastest way to do that?

(The purpose is printing on paper, so using some web-technology is not an option and having one, single picture file will eliminate many usage errors.)

I tried fahd's suggestion but I get a NullPointerException when I try to create a BufferedImage with 24576 pixels wide and 15360 pixels high. Any ideas?
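For scale: a 24576 x 15360 TYPE_INT_RGB raster alone needs about 1.4 GB, so the first thing to rule out is heap size; also note that ImageIO.read returns null (hence a later NPE) for files it cannot decode. A hedged sketch of the stitching itself in Scala, with a hypothetical tiles/ directory and an assumed 96 x 64 grid of 256-pixel tiles:

import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

val cols = 96; val rows = 64; val tile = 256          // assumed layout
// ~1.4 GB of pixels: run the JVM with a large enough -Xmx.
val canvas = new BufferedImage(cols * tile, rows * tile, BufferedImage.TYPE_INT_RGB)
val g = canvas.createGraphics()

val files = new File("tiles").listFiles().filter(_.getName.endsWith(".png")).sorted
for ((f, i) <- files.zipWithIndex) {
  val img = ImageIO.read(f)                            // null if undecodable
  require(img != null, s"could not read $f")
  g.drawImage(img, (i % cols) * tile, (i / cols) * tile, null)
}
g.dispose()
ImageIO.write(canvas, "png", new File("combined.png"))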

by soc at September 18, 2014 01:54 PM

QuantOverflow

For what kind of stochastic process is Itô's lemma adopted?

I have been told that Itô's lemma serves as the stochastic calculus counterpart of the chain rule. Yet my tutor also mentioned it is not used for all stochastic processes.

Is this statement true?

If so, what are the conditions under which Itô's lemma is used?
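For reference (standard background, not specific to any one textbook): the usual statement of Itô's lemma assumes $X_t$ is an Itô drift-diffusion process, $dX_t = \mu_t\,dt + \sigma_t\,dW_t$, in which case

$$df(t, X_t) = \Big(\frac{\partial f}{\partial t} + \mu_t\,\frac{\partial f}{\partial x} + \tfrac{1}{2}\sigma_t^2\,\frac{\partial^2 f}{\partial x^2}\Big)dt + \sigma_t\,\frac{\partial f}{\partial x}\,dW_t,$$

with $f$ twice continuously differentiable in $x$ and once in $t$. More general versions exist for semimartingales, but processes outside that family (e.g. fractional Brownian motion) are not covered.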

by Omley at September 18, 2014 01:54 PM

StackOverflow

Scala split string and sort data

Hi, I am new to Scala and I achieved the following in Scala. My string contains the following data:

CLASS: Win32_PerfFormattedData_PerfProc_Process$$(null)|CreatingProcessID|Description|ElapsedTime|Frequency_Object|Frequency_PerfTime|Frequency_Sys100NS|HandleCount|IDProcess|IODataBytesPersec|IODataOperationsPersec|IOOtherBytesPersec|IOOtherOperationsPersec|IOReadBytesPersec|IOReadOperationsPersec|IOWriteBytesPersec|IOWriteOperationsPersec|Name|PageFaultsPersec|PageFileBytes|PageFileBytesPeak|PercentPrivilegedTime|PercentProcessorTime|PercentUserTime|PoolNonpagedBytes|PoolPagedBytes|PriorityBase|PrivateBytes|ThreadCount|Timestamp_Object|Timestamp_PerfTime|Timestamp_Sys100NS|VirtualBytes|VirtualBytesPeak|WorkingSet|WorkingSetPeak|WorkingSetPrivate$$(null)|0|(null)|8300717|0|0|0|0|0|0|0|0|0|0|0|0|0|Idle|0|0|0|100|100|0|0|0|0|0|8|0|0|0|0|0|24576|24576|24576$$(null)|0|(null)|8300717|0|0|0|578|4|0|0|0|0|0|0|0|0|System|0|114688|274432|17|0|0|0|0|8|114688|124|0|0|0|3469312|8908800|311296|5693440|61440$$(null)|4|(null)|8300717|0|0|0|42|280|0|0|0|0|0|0|0|0|smss|0|782336|884736|110|0|0|1864|10664|11|782336|3|0|0|0|5701632|19357696|1388544|1417216|700416$$(null)|372|(null)|8300715|0|0|0|1438|380|0|0|0|0|0|0|0|0|csrss|0|3624960|3747840|0|0|0|15008|157544|13|3624960|10|0|0|0|54886400|55345152|5586944|5648384|2838528$$(null)|424|(null)|8300714|0|0|0|71|432|0|0|0|0|0|0|0|0|csrss#1|0|8605696|8728576|0|0|0|8720|96384|13|8605696|9|0|0|0|50515968|50909184|7438336|9342976|4972544

Now I want to find the data whose keys are PercentProcessorTime, ElapsedTime, and so on. For this I first split the above string on $$ and then split again using |. In the resulting parts I search for the one where PercentProcessorTime is present and get the index of that string; once I have it, I skip the first two arrays from the $$ split and get the data for PercentProcessorTime using the index. It looks complicated, but I think the following code should help:

// First split string as below

val processData = winProcessData.split("\\$\\$")


// get index here
  val getIndex: Int = processData.find(part => part.contains("PercentProcessorTime"))
  .map {
    case getData =>
      getData

  } match {
    case Some(s) => s.split("\\|").indexOf("PercentProcessorTime")
    case None => -1
  }
 val getIndexOfElapsedTime: Int = processData.find(part => part.contains("ElapsedTime"))
  .map {
    case getData =>
      getData

  } match {
    case Some(s) => s.split("\\|").indexOf("ElapsedTime")
    case None => -1
  }
 // now fetch data of above index as below
for (i <- 2 to (processData.length - 1)) {
    val getValues = processData(i).split("\\|")
    val getPercentProcessTime = getValues(getIndex).toFloat
    val getElapsedTime = getValues(getIndexOfElapsedTime).toFloat
    Logger.info("("+getPercentProcessTime+","+getElapsedTime+"),")
  }

Now the problem is that using the above code I get the data for the given key by index, so my output was (8300717,100), (8300717,17), (8300717,110)... I want to sort this data using getPercentProcessTime, so my output should be (8300717,110), (8300717,100), (8300717,17)..., and the data should be in a list so I can pass the list to a case class.
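A sketch of one way to do that, assuming the pair order (elapsedTime, percentProcessorTime) shown in the sample output: collect the pairs into a list first, then sort by the percent value, descending.

// Build the list of (elapsed, percent) pairs, then sort by percent, descending.
val pairs: List[(Float, Float)] =
  (2 until processData.length).toList.map { i =>
    val values = processData(i).split("\\|")
    (values(getIndexOfElapsedTime).toFloat, values(getIndex).toFloat)
  }
val sorted: List[(Float, Float)] = pairs.sortBy { case (_, percent) => -percent }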

by yogesh at September 18, 2014 01:52 PM

QuantOverflow

Analog - Pattern Recognition model using KNN

I'm building a pattern recognition model for my master's thesis. The idea is to build a framework with some macro variables (long/short term rates; rate differentials; equity; fx; vix) in order to find which asset class (or investment style or strategy) would perform better in the current period, based on similarities with historical data. For that purpose I am using the k-nearest neighbour algorithm. I would like to ask for suggestions regarding not only the quantitative method (KNN) but also the most significant macro variables to use.

I would also like to ask if you know of any relevant literature regarding this or any similar theme.

Thanks in advance

by goncalogc at September 18, 2014 01:49 PM

StackOverflow

Why do you need to create these JSON reads/writes when in Java you didn't have to?

Please correct me if I am wrong, but when using Java with, say, Spring MVC, you didn't have to create these extra classes to map your Java class to JSON and JSON back to the class.

Why do you have to do this in Play with Scala? Is it something to do with Scala?

case class Location(lat: Double, long: Double)

implicit val locationWrites: Writes[Location] = (
  (JsPath \ "lat").write[Double] and
  (JsPath \ "long").write[Double]
)(unlift(Location.unapply))


implicit val locationReads: Reads[Location] = (
  (JsPath \ "lat").read[Double] and
  (JsPath \ "long").read[Double]
)(Location.apply _)
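Incidentally, for flat case classes like this, Play (2.1+) can generate both directions with its JSON macro, so the boilerplate shrinks to one line:

import play.api.libs.json.{Format, Json}

// Macro-generated Reads + Writes, equivalent to the hand-written pair above.
implicit val locationFormat: Format[Location] = Json.format[Location]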

by public static at September 18, 2014 01:45 PM

PlayFramework how to access routes in view from other submodule view

I am writing an application in Play Framework using submodules:

  • common
  • shopping
  • ctd

I would like to have, in my view, a link to a view from another module. /modules/common/app/views/choose.scala.html:

@main() {
   <a href="@controllers.common.routes.Index.index">common module</a>
   <a href="@controllers.shopping.routes.Index.index">shopping module</a>
}

That code gives me an error:

Compilation error
object shopping is not a member of package controllers
        <a href="@controllers.shopping.routes.Index.index">

Please help me. How can I make this code compile correctly?

My main routes file:

# Include sub projects
->  /               common.Routes
->  /cirs           ctd.Routes
->  /shopping       shopping.Routes

My shopping.routes file:

GET    /                                   controllers.shopping.Index.index()

The problem is that Play doesn't see my controllers' routes when they are in a different package than the view that calls them from another module's views. How can I fix it?

by masterdany88 at September 18, 2014 01:44 PM

StackOverflow

Type weirdness with Map#getOrElse

Consider

scala> val m = Map('a -> 3, 'b -> 4)
m: scala.collection.immutable.Map[Symbol,Int] = Map('a -> 3, 'b -> 4)

scala> val d: Double = m.getOrElse('c, 0)
<console>:8: error: type mismatch;
 found   : AnyVal
 required: Double
       val d: Double = m.getOrElse('c, 0)
                                  ^

scala> m.getOrElse('c, 0)
res0: Int = 0

scala> m.getOrElse('a, 0)
res1: Int = 3

Why is it that Scala thinks that the getOrElse call returns AnyVal even though it obviously returns an Int?

Furthermore, even this fails with the same error:

scala> val x: Double = m.getOrElse('a, 0): Double
<console>:8: error: type mismatch;
 found   : AnyVal
 required: Double
       val x: Double = m.getOrElse('a, 0): Double

This however works:

scala> val x: Double = m.getOrElse('a, 0): Int
x: Double = 3.0

This happens on 2.11.x; I have not tried it on 2.10.x.
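A sketch of what is going on: getOrElse is declared roughly as def getOrElse[B1 >: B](key: A, default: => B1): B1. With B = Int, the compiler must pick a B1 that is a supertype of Int; Double is not one, so it lands on the least upper bound, AnyVal. Widening the Int result afterwards compiles fine, which is exactly what the last example above does:

val m = Map('a -> 3, 'b -> 4)

// Ascribe Int first, then the usual Int => Double widening applies:
val d1: Double = m.getOrElse('a, 0): Int     // d1 = 3.0
// Or convert explicitly:
val d2: Double = m.getOrElse('a, 0).toDouble // d2 = 3.0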

by Erik Allik at September 18, 2014 01:38 PM

Configuration issue for Spray https server with self-signed certificate?

I am using Spray 1.3, Akka 2.3, and Scala 2.11 on Mac 10.9.4 to set up an HTTP server. I am following the Ch. 2 example in Manning's Akka in Action (sample code available here: https://github.com/RayRoestenburg/akka-in-action.git), which compiles, runs, and behaves as expected when I use http, but I am having trouble configuring it for use with https.

To run with https, I have generated a self-signed certificate as follows:

keytool -genkey -keyalg RSA -alias selfsigned -keystore myjks.jks -storepass abcdef -validity 360 -keysize 2048

Following this example, https://github.com/spray/spray/tree/v1.2-M8/examples/spray-can/simple-http-server/src/main/scala/spray/examples

I've added an SSL config class:

package com.goticks

import java.security.{SecureRandom, KeyStore}
import javax.net.ssl.{KeyManagerFactory, SSLContext, TrustManagerFactory}
import spray.io._

// for SSL support (if enabled in application.conf)
trait MySSLConfig {

  // if there is no SSLContext in scope implicitly the HttpServer uses the default SSLContext,
  // since we want non-default settings in this example we make a custom SSLContext available here
  implicit def sslContext: SSLContext = { 
    val keyStoreResource = "myjks.jks"
    val password = "abcdef"

    val keyStore = KeyStore.getInstance("jks")
    keyStore.load(getClass.getResourceAsStream(keyStoreResource), password.toCharArray)
    val keyManagerFactory = KeyManagerFactory.getInstance("SunX509")
    keyManagerFactory.init(keyStore, password.toCharArray)
    val trustManagerFactory = TrustManagerFactory.getInstance("SunX509")
    trustManagerFactory.init(keyStore)
    val context = SSLContext.getInstance("TLS")
    context.init(keyManagerFactory.getKeyManagers, trustManagerFactory.getTrustManagers, new SecureRandom)
    context
  }

  // if there is no ServerSSLEngineProvider in scope implicitly the HttpServer uses the default one,
  // since we want to explicitly enable cipher suites and protocols we make a custom ServerSSLEngineProvider
  // available here
  implicit def sslEngineProvider: ServerSSLEngineProvider = { 
    ServerSSLEngineProvider { engine =>
      engine.setEnabledCipherSuites(Array("TLS_RSA_WITH_AES_256_CBC_SHA"))
      engine.setEnabledProtocols(Array("SSLv3", "TLSv1"))
      engine
    }   
  }
}

I've updated the Main class to use the SSL config:

package com.goticks

import akka.actor._
import akka.io.IO

import spray.can.Http
import spray.can.server._

import com.typesafe.config.ConfigFactory

object Main extends App with MySSLConfig {
  val config = ConfigFactory.load()
  val host = config.getString("http.host")
  val port = config.getInt("http.port")

  implicit val system = ActorSystem("goticks")

  val api = system.actorOf(Props(new RestInterface()), "httpInterface")
  IO(Http) ! Http.Bind(listener = api, interface = host, port = port)
}

and I've updated the application.conf:

spray {
  can {
    server {
      server-header = "GoTicks.com REST API"
      ssl-encryption = on
    }
  }
}

After compiling and running the server, I get the following error when I try to do an https GET:

[ERROR] [09/15/2014 10:40:48.056] [goticks-akka.actor.default-dispatcher-4] [akka://goticks/user/IO-HTTP/listener-0/7] Aborting encrypted connection to localhost/0:0:0:0:0:0:0:1%0:59617 due to [SSLHandshakeException:no cipher suites in common] -> [SSLHandshakeException:no cipher suites in common]

I'm not sure if my problem is with the generated key, or with my configuration. Incidentally, my final goal is to use this configuration with a TCP socket (see my other question: TCP socket with SSL on Scala with Akka), but I was unable to find documentation for running secure TCP, so I thought I would start with HTTPS.

Any help is appreciated.

by Ampers4nd at September 18, 2014 01:34 PM

Validating Unique Entry?

I'm kind of surprised this is not covered in any documentation that I've read, or I've simply overlooked it. Validating unique entries seems like something that should be commonplace.

When creating a new entry from a form, what is the preferred method of checking uniqueness of the member?

val memberForm = Form(
    mapping(
      "id" -> ignored(NotAssigned:Pk[Long]),
      "membername" -> nonEmptyText,
      "email" -> email,
      "password" -> nonEmptyText
    )(Member.apply)(Member.unapply)
)

Is the preferred method to create a custom validator?

def validateMember(name: String, email: String) = {
    // check unique name & email
}

Or should this be done some other way?

by bad at scala at September 18, 2014 01:26 PM

Dave Winer

Simple: When you think of something funny when reading someone else's message, ask yourself if they'll get it. If not, don't send it.

September 18, 2014 01:25 PM

TheoryOverflow

How to map random Cartesian points into a 2D array

I was wondering if there is any algorithm, theoretical or already implemented, or if it's even possible at all, whereby, given N random points (x,y), we can map them into a 2D array such that each mapped point in the array has the property that its neighboring points (at most 8 points around it) are the ones closest to it.

I would also like to add an extra restriction that no node in the 2d array is NULL meaning that if we have for example N = 1 Million elements then the 2d array would be sqrt(N) x sqrt(N) or 1000 x 1000 dimension, where each element of that array occupies a point.

In addition, we can also consider sorting, if it helps, by x-coordinate and by y-coordinate...

Thanks in advance!

by Geo Papas at September 18, 2014 01:24 PM

StackOverflow

avoid making a play app for a core module

I have a modularized application:

my-core
my-module1
my-module2

my-core is a regular sbt project while the other modules are Play apps. I wanted to avoid making my-core a Play app (for no specific reason except that my-core never serves any endpoints and only houses core code used by all the others, like my-module1, my-module2, etc.). The situation now is that the other Play modules, my-module1 and my-module2, depend on Play to do things like

lazy val db: Database = {
  if (play.api.Play.isTest(play.api.Play.current)) doSomething
  else if (play.api.Play.isDev(play.api.Play.current)) doSomethingElse
  else doSomethingDefault
}

The above code should not be repeated in every module but should live in core. That creates pressure to make my-core a Play application. I am stating the obvious here, but do you see any way to still keep my-core a simple sbt project and not a Play app?

by user2066049 at September 18, 2014 01:12 PM

CompsciOverflow

Virtualization on a server and logging in over network [on hold]

At work I have computers that, when turned on, automatically connect to a server; when you log in, you're logging into a server database, a virtualized computer is created, and you work on that virtualized computer.

My question is what is really going on and what programs can I look into to explore and play around with this at home?

This is the best explanation I can give; however, I am not exactly familiar with the process. I am a programmer and I do work with computers, so feel free to explain in whatever way you can. Thank you for the response.

by user71666 at September 18, 2014 01:03 PM

StackOverflow

Release management using SBT

I am new to Scala; we are doing a prototype using Scala with SBT as the build tool. Can someone help me release WAR files to Artifactory using SBT? In Java I use Maven, specifying Artifactory URLs in settings.xml and snapshot or release versions in the POM file. I have no clue how to achieve this functionality using SBT.

Thanks in advance

Prasad
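In sbt the Maven distributionManagement/settings.xml pair roughly maps to the publishTo setting plus a credentials file. A minimal sketch in build.sbt, with a hypothetical Artifactory URL (sbt 0.13 syntax):

publishTo := {
  val artifactory = "https://artifactory.example.com/artifactory/"  // hypothetical URL
  if (version.value.endsWith("-SNAPSHOT"))
    Some("snapshots" at artifactory + "libs-snapshot-local")
  else
    Some("releases" at artifactory + "libs-release-local")
}

credentials += Credentials(Path.userHome / ".sbt" / ".credentials")

Producing the WAR artifact itself additionally needs a web plugin (e.g. xsbt-web-plugin); the snippet above only selects the snapshot or release repository by version suffix, as the Maven setup does.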

by Prasad at September 18, 2014 12:49 PM

/r/emacs

Phonegap developing?

Is there any good mode for developing with PhoneGap (i.e. JavaScript)? Currently I just work with ECB for C++, but I don't know what's better for this new stuff.

Thank you.

submitted by Acktung

September 18, 2014 12:48 PM

AWS

The AWS Loft Will Return on October 1st

As I promised earlier this year, the AWS Pop-up Loft is reopening on Wednesday, October 1st in San Francisco with a full calendar of events designed to help developers, architects, and entrepreneurs learn about and make use of AWS.

Come to the AWS Loft and meet 1:1 with an AWS technical expert, learn about AWS in detailed product sessions, and gain hands-on experience through our instructor-led Technical Bootcamps and our self-paced hands-on labs. Take a look at the Schedule of Events to learn more about what we have planned.

Hours and Location
The AWS Loft will be open Monday through Friday, 10 AM to 6 PM, with special evening events that will run until 8 PM. It is located at 925 Market Street in San Francisco.

Special Events
We are also setting up a series of events with AWS-powered startups and partners from the San Francisco area. The list is still being finalized but already includes cool companies like Runscope (Automated Testing for APIs and Backend Services), NPM (Node Package Manager), Circle CI (Continuous Integration and Deployment), Librato (Metrics, Monitoring, and Alerts), CoTap (Secure Mobile Messaging for Businesses), and Heroku (Cloud Application Platform).

A Little Help From Our Friends
AWS and Intel share a passion for innovation, along with a track record of helping startups to be successful. Intel will demonstrate the latest technologies at the AWS Loft, including products that support the Internet of Things and the newest Xeon processors. They will also host several talks.

The folks at Chef are also joining forces with the AWS Loft and will be bringing their DevOps expertise to the AWS Loft through hosted sessions and a training curriculum. You'll be able to learn about the Chef product — an automation platform for deploying and configuring IT infrastructure and applications in the data center and in the Cloud.

Watch This!
In order to get a taste for the variety of activities and the level of excitement you'll find at the AWS Loft, watch this short video:

[embedded video]

Come Say Hello
I will be visiting and speaking at the AWS Loft in late October and hope to see and talk to you there!

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at September 18, 2014 12:40 PM

StackOverflow

ZMQ Pub Sub - Should it be dropping messages?

I was trying to test a simple implementation of pub/sub. I am finding that if I leave the subscriber up and send messages, they are not all received by the subscriber. Sometimes all are received, sometimes only part, and sometimes the whole set is not received.

Run the subscriber (leave it running), followed by running the publisher multiple times.

import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Context;
import org.zeromq.ZMQ.Socket;

public class TestSubscriber {

    public static void main(String[] args) {
        // Prepare our context and subscriber
        Context context = ZMQ.context(1);
        Socket subscriber = context.socket(ZMQ.SUB);

        subscriber.connect("tcp://localhost:5563");
        subscriber.subscribe("B".getBytes());

        System.out.println("Starting Subscriber..");
        int i = 0;
        while (true) {
            String address = subscriber.recvStr();
            String contents = subscriber.recvStr();
            System.out.println(address + ":" + contents + ": " + i);
            i++;
        }
    }
}

Publisher:

import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Context;
import org.zeromq.ZMQ.Socket;


public class TestPublisher {

public static void main (String[] args) throws Exception {
    Context context = ZMQ.context(1);
    Socket publisher = context.socket(ZMQ.PUB);
    publisher.bind("tcp://*:5563");
    System.out.println("Starting Publisher..");
    publisher.setIdentity("B".getBytes());
    publisher.setHWM(1000);
    for (int i = 0; i < 10; i++) {
        Thread.sleep(10l);
        publisher.sendMore("B");
        boolean isSent = publisher.send("We would like to see this:"+i);
        System.out.println("Message was sent "+i+" , "+isSent);
    }

    Thread.sleep(1000);
    publisher.close ();
    context.term ();
}

}

by Bobby Z at September 18, 2014 12:36 PM

ZeroMQ subscribing in Delphi

I'm currently using ZeroMQ to send serialized data objects from a Python server to a Delphi client. As you probably know there are different kinds of models for a ZeroMQ connection, I'm using both request/reply and publish/subscribe.

The request/reply model works great, no problems at all. But when I try to subscribe to data in the Delphi client, data which comes from the ZeroMQ publisher written in Python, I have trouble getting the data. The code I use for subscribing to the data looks like this:

PubZeroMQ.zErr( ctx=nil );
skt_pub := zmq_socket(ctx, ZMQ_SUB);
PubZeroMQ.zErr( ctx=nil );
zmq_setsockopt( skt_pub, ZMQ_SUBSCRIBE, 0, 0);
PubZeroMQ.zErr( 0<>zmq_connect(skt_pub, 'tcp://*:6001'));

And then I try to read data using this code:

lvbytes := PubZeroMQ.zSubscribe(skt_pub);

zSubscribe looks like this:

zmq_msg_init(@m);
r := zmq_recv(zSocket, @m, 0);
zErr( (r<>0) and (zmq_errno()<>11) ); // 11 = Resource temporarily unavailable
r := zmq_msg_size(@m);
SetLength(Result, r);
System.Move( zmq_msg_data(@m)^, Result[1], r );

I tried debugging, and my program gets stuck at zmq_recv(zSocket, @m, 0); as if it's waiting for some data but doesn't get it. The zmq_recv function looks like this:

function  zmq_recv(s : Pointer; msg : Pzmq_msg_t; flags : Integer) : Integer; cdecl;

The connection seems to be fine, as I don't have any problems using the request/reply model, and I'm sure the Python server is publishing data, as I have another client in C++ which works great. Am I doing the subscription wrong?

Here is how I'm doing the subscribing in the c++ client

void* subscriber = zmq_socket( context, ZMQ_SUB );
zmq_setsockopt( subscriber, ZMQ_SUBSCRIBE, "", 0 );
zmq_connect( subscriber, "tcp://*:6001" );

And the read:

const unsigned int BUFF_SIZE = 1024;
char buffer[BUFF_SIZE];
size_t size;

size = zmq_recv( subscriber, (void*)&buffer[0], BUFF_SIZE, 0 );

Thanks in advance!

by Araz at September 18, 2014 12:01 PM

clojure conch's shell api doesn't show result for external commands like java

I want to use a Clojure script for development tasks on my project, like running the project from the script, etc.

I used the lein-oneoff plugin and the clojure conch library for its shell API. The script is below (dubofsky.clj):

(defdeps                                                                                                                                                                         
    [[org.clojure/clojure "1.6.0"]                                                                                                                       
     [me.raynes/conch "0.8.0"]])                                                                    

(ns deploy                                                                                          
    (use [clojure.java.shell :only  [sh]])                                                          
    (:require [me.raynes.conch.low-level :as shell]))

  (sh "sh" "-c" "echo hello > /dev/null")
  (print (shell/stream-to-string (shell/proc "echo" "....................................") :out))
  (print (shell/stream-to-string (shell/proc "echo" "Project starting...") :out))
  (print (shell/stream-to-string (shell/proc "grails" "run-app") :out))
  (print (shell/stream-to-string (shell/proc "bash" "--version") :out))

However, on lein oneoff dubofsky.clj, it doesn't seem to print the output of external commands like java, grails, etc.

Result for above script is

$ lein oneoff --exec devops.clj 
....................................
Project starting...
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

This is exactly the same problem as mine.

by Aman Maharjan at September 18, 2014 11:59 AM

Using Prismatic/schema for form validation with Liberator

Is Prismatic/schema a good fit for form validation? I have never designed a form validation lib before, but I imagine it outputting something like this instead of throwing exceptions on s/validate:

{::errors {:name [{:missing "Required field."}
                  {:length "Must be at least 3 characters."}]
           :email [{:email "Must be a valid email address"}]}}

I'm hoping someone has used it with Compojure and Liberator, but I can't find anything online.

by pate at September 18, 2014 11:50 AM

/r/emacs

Is there any good CoffeeScript mode for Emacs ?

Is there any good CoffeeScript mode for Emacs? I know that there is a coffeescript-mode but it is horrible.

submitted by lxsameer

September 18, 2014 11:40 AM

Planet Emacsen

Irreal: Launching Emacs Features

Artur Malabarba has a nice riff on his toggling keymap that I wrote about here. This time he considers launching Emacs commands such as calc, ediff-buffers, man, shell, proced, and others. As Malabarba says, these are useful commands that you don't use that often so it's handy having a way of launching them with an easily remembered key sequence.

The idea is the same as it was for toggling modes. You create a key map attached to 【Ctrl+x l】1 in such a way that the keys are intuitive and easy to remember. Thus 【Ctrl+x l c】 launches calc.

As with his toggling post, you will probably have different commands you want to bind to your keymap but his method is completely general. If you have commands that you rarely launch, you may find his post useful.

Footnotes:

1

Or some other sequence such as 【Ctrl+c l】 if you're worried about future conflicts.

by jcs at September 18, 2014 11:22 AM

StackOverflow

IntelliJ IDEA Hotkey for comment does not work with Scala - what I am doing wrong?

None of the hotkeys (Ctrl+Slash or Ctrl+Divide, Ctrl+Shift+Slash or Ctrl+Shift+Divide) mentioned at http://www.jetbrains.com/idea/webhelp/keyboard-shortcuts-you-cannot-miss.html

works with my IntelliJ IDEA 11.1.2 installation under Windows 7, 64-bit. I use a German keyboard layout. What am I doing wrong?

by John Threepwood at September 18, 2014 11:21 AM

getting rid of _1: and _2: in LinkedHashMap by changing insertion code

val hostSQLList = jsonObj.getOrElse("hostSQLList", " ").asInstanceOf[List[String]]

var sqlRepMap = mutable.LinkedHashMap[String, Any]()
var sqlStatusMap = mutable.LinkedHashMap[String, Any]()
var sqlSlowQuery = mutable.LinkedHashMap[String, Any]()

hostSQLList.foreach(hostSQL => {
  if (hostSQL != null) {
    sqlRepMap ++= mutable.LinkedHashMap(hostSQL -> AlertDashboard.checkSQLReplicationLag(hostSQL))

    sqlStatusMap ++= mutable.LinkedHashMap(hostSQL -> AlertDashboard.checkSQLStatus(hostSQL))

    sqlSlowQuery ++= mutable.LinkedHashMap(hostSQL -> AlertDashboard.checkSQLSlowQuery(hostSQL))
  }

  else {
    sqlRepMap ++= mutable.LinkedHashMap(hostSQL -> "Found nothing")
    sqlStatusMap ++= mutable.LinkedHashMap(hostSQL -> "Found nothing")
    sqlSlowQuery ++= mutable.LinkedHashMap(hostSQL -> "Found nothing")
  }
})

This is giving me the following output :

{  
   "sqlRepMap":[  
      {  
         "_1":"xyz.com",
         "_2":{  
            "18/09/2014 15:00:39_0":0.0,
            "18/09/2014 15:30:22_0":0.0,
            "18/09/2014 15:49:26_0":0.0
         }
      }
   ],
   "sqlStatusMap":[  
      {  
         "_1":"xyz.com",
         "_2":{  
            "18/09/2014 15:00:39_0":1,
            "18/09/2014 15:30:22_0":1,
            "18/09/2014 15:49:26_0":1
         }
      }
   ],
   "sqlSlowQuery":[  
      {  
         "_1":"xyz.com",
         "_2":{  
            "18/09/2014 15:00:39_0":0,
            "18/09/2014 15:30:22_0":0,
            "18/09/2014 15:49:26_0":0
         }
      }
   ]
}

When I actually want something like this :

{  
   "sqlRepMap":[  
      {  
         "xyz.com":{  
            "18/09/2014 15:00:39_0":0.0,
            "18/09/2014 15:30:22_0":0.0,
            "18/09/2014 15:49:26_0":0.0
         }
      }
   ],
   "sqlStatusMap":[  
      {  
         "xyz.com":{  
            "18/09/2014 15:00:39_0":1,
            "18/09/2014 15:30:22_0":1,
            "18/09/2014 15:49:26_0":1
         }
      }
   ],
   "sqlSlowQuery":[  
      {  
         "xyz.com":{  
            "18/09/2014 15:00:39_0":0,
            "18/09/2014 15:30:22_0":0,
            "18/09/2014 15:49:26_0":0
         }
      }
   ]
}

The same insertion code gives me what I want for a normal Map, but for a LinkedHashMap it somehow isn't working. Or could there be something wrong with my JSON parser? Thanks in advance!
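A hedged guess at the cause, with a sketch: many Scala JSON serializers special-case immutable Maps; if a collection falls through to generic Traversable handling instead, a map is rendered as a sequence of tuples, which is exactly where "_1"/"_2" come from. Converting to an insertion-ordered immutable map before serializing is one way to test this:

import scala.collection.immutable.ListMap

// ListMap is an immutable Map that keeps entries in insertion order
// (guaranteed in recent Scala; verify on your version), so libraries that
// special-case immutable Maps should render it as a JSON object.
val sqlRepMapForJson: ListMap[String, Any] = ListMap(sqlRepMap.toSeq: _*)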

by user3851565 at September 18, 2014 11:14 AM

QuantOverflow

Solving Black-Scholes PDE using Laplace transform

I'm trying to obtain the Laplace transform of the Call option price with respect to time to maturity under the CEV process.

The well-known Black–Scholes PDE is given by $$ \frac{1}{2}\sigma(x)^2x^2\frac{\partial^2}{\partial x^2}C(x,\tau)+\mu x\frac{\partial}{\partial x}C(x,\tau)-rC(x,\tau)-\frac{\partial}{\partial \tau}C(x,\tau)=0, $$ where the initial condition is $C(x,0)=\max(x-K,0)$ and $\sigma(x)=\delta x^\beta$.

Taking the Laplace transform with respect to $\tau$, we obtain the following ODE: $$ \frac{1}{2}\delta^2 x^{2\beta+2}\frac{\partial^2}{\partial x^2}\hat{C}(x,\lambda)+\mu x\frac{\partial}{\partial x}\hat{C}(x,\lambda)-(\lambda+r)\hat{C}(x,\lambda)=-\max(x-K,0), $$ where $\hat{C}(x,\lambda)=\int_0^\infty e^{-\lambda \tau}C(x,\tau)d\tau$

and the initial condition is transformed to $$ \hat{C}(x,\lambda)=\int_0^\infty e^{-\lambda \tau}C(x,0) d\tau=\max(x-K,0)/\lambda $$(is this right??? it seems wrong..)
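For reference, the initial condition enters through the transform of the time derivative, not as a separately transformed condition:

$$\int_0^\infty e^{-\lambda\tau}\,\frac{\partial C}{\partial\tau}(x,\tau)\,d\tau = \lambda\,\hat{C}(x,\lambda) - C(x,0) = \lambda\,\hat{C}(x,\lambda) - \max(x-K,0),$$

which is exactly how the $-\max(x-K,0)$ term on the right-hand side of the ODE above arises.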

Then, $\hat{C}(x,\lambda)$ can be analytically formulated by the case $x>K$ and $x\leq K$.

How can I get an explicit formula for $\hat{C}(x,\lambda)$? I can't proceed from this stage.

I know one paper, "(2001 Dmitry) Pricing and Hedging Path-Dependent Options under the CEV", related to this question. However, there are big jumps in it that I cannot follow readily. Could you explain it step by step?

by user155214 at September 18, 2014 11:11 AM

StackOverflow

Change Return Program

My methodology is to attempt a problem using imperative code and then to attempt the same problem again using idiomatic functional code.

Here is the problem I am working through at the moment:

Change Return Program - The user enters a cost and then the amount of money given. The program will figure out the change and the number of quarters, dimes, nickels, pennies needed for the change.

Here is my naïve (imperative) solution:

open System

let cost = 1.10m
let amountGiven = 2.00m
let mutable change = amountGiven - cost

while change <> 0m do
    if change >= 0.25m then
        Console.WriteLine "quarter"
        change <- change - 0.25m
    else if change >= 0.10m then
        Console.WriteLine "dime"
        change <- change - 0.10m
    else if change >= 0.05m then
        Console.WriteLine "nickel"
        change <- change - 0.05m
    else if change >= 0.01m then
        Console.WriteLine "penny"
        change <- change - 0.01m

How can I write this code using functional constructs (i.e. without mutable)?
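A minimal functional sketch of the same computation, shown in Scala for concreteness (the shape translates directly to an F# recursive function or fold): recurse, emitting the largest coin that still fits, until nothing remains.

val coins: List[(String, BigDecimal)] = List(
  "quarter" -> BigDecimal("0.25"),
  "dime"    -> BigDecimal("0.10"),
  "nickel"  -> BigDecimal("0.05"),
  "penny"   -> BigDecimal("0.01"))

// Greedy change-making without mutation: find the largest usable coin,
// emit its name, and recurse on the remainder.
def makeChange(change: BigDecimal): List[String] =
  coins.find { case (_, value) => value <= change } match {
    case Some((name, value)) => name :: makeChange(change - value)
    case None                => Nil
  }

makeChange(BigDecimal("2.00") - BigDecimal("1.10"))
// List(quarter, quarter, quarter, dime, nickel)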

by Caster Troy at September 18, 2014 11:04 AM

CompsciOverflow

How exactly does MOV AX load data from RAM?

Somewhere on the Internet I read: whenever the word size is greater than the memory cell size, multiple memory cells need to be accessed.

Example, for a 16-bit processor:

MOV AX [2000]

To transfer the memory content we need to transfer 16 bits of data, so we need to access 2 memory cells, M[2000h] and M[2001h].

But I didn't get this. Why 2 cells? I interpreted it as follows:

1) An 8-bit binary code is fetched, which is decoded as MOV AX.
2) From the next memory location, (00h) is fetched, as memory is byte-addressable.
3) From the next memory location, (20h) is fetched.
4) In the IR register this is treated as (2000h), and the data at memory location (2000h) is fetched and moved to the AX register.

So why access memory location M[2001h]?

by user1745866 at September 18, 2014 11:02 AM

StackOverflow

How could I retrieve and store random UUIDS in zeromq sockets

I need to communicate between multiple clients. When I try to run the file (in multiple terminals) I get the same identity, so I let the router socket set the UUID automatically. But I found that I cannot use that identity at the server for routing between multiple clients. How should I handle multiple client IDs? I am trying to build an asynchronous chat server, following the approach where each client with a dealer socket connects to the server (router socket); the server then determines the clients' IDs, reads the message, and routes accordingly.

by monsterrrrr at September 18, 2014 10:40 AM

Can't set memory settings for `sbt start`

I'm trying to run sbt start in a Play Framework application written in Scala, on a machine that is an EC2 t2.micro instance on AWS. But I can't, because "There is insufficient memory for the Java Runtime Environment to continue."

The machine has 1GB of memory, but in practice 930MB of free memory to use while running the remaining of OS processes. It is Ubuntu Server 14.04 LTS. The app is small, cute.

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000d5550000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 715849728 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /app/incoming/hs_err_pid9709.log

Here is the link to the log file for more info.

Inside I see jvm_args: -Xms1024m -Xmx1024m -XX:ReservedCodeCacheSize=128m ..., despite having set my JVM args to something else in many different ways, with no effect.

With these arguments -Xss1m -Xms256m -Xmx512m -XX:+CMSClassUnloadingEnabled I tried everything:

  • setting JVM args in /usr/share/sbt-launcher-packaging/conf/sbtopts
  • same in /usr/share/sbt-launcher-packaging/conf/sbtconfig.txt
  • supplying the args directly when running: sbt -J-Xss1m -J-Xms256m -J-Xmx512m -J-XX:+CMSClassUnloadingEnabled start
  • already set fork in run := true in build.sbt
  • javaOptions in run += "-Xmx512m -XX:+CMSClassUnloadingEnabled" in build.sbt

None of them helps. The same 1024m values appear in the logs every time I run the app. Please help.

by Kovács Hunor at September 18, 2014 10:38 AM

TheoryOverflow

Explaining monad transformers in categorical terms

Most resources regarding categorical notions in programming describe monads, but I've never seen a categorical description of monad transformers.

How could monad transformers be described in the terms of category theory?

In particular, I'd be interested in:

  • the relationship between monad transformers and their corresponding base monads;
  • the relationship between them and the monads they're transforming into new monads;
  • monad transformer stacks.

by Petr Pudlák at September 18, 2014 10:38 AM

CompsciOverflow

Solving a recurrence relation with √n as parameter

Consider the recurrence

$\qquad\displaystyle T(n) = \sqrt{n} \cdot T\bigl(\sqrt{n}\bigr) + c\,n$

for $n \gt 2$ with some positive constant $c$, and $T(2) = 1$.

I know the Master theorem for solving recurrences, but I'm not sure as to how we could solve this relation using it. How do you approach the square root parameter?
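One standard first manipulation, as a sketch: divide the recurrence by $n$ and substitute $n = 2^m$, which turns the square root into a halving of $m$:

$$S(m) := \frac{T(2^m)}{2^m} \quad\Longrightarrow\quad S(m) = S(m/2) + c \quad\Longrightarrow\quad S(m) = \Theta(\log m),$$

so $T(n) = \Theta(n \log \log n)$.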

by KodeSeeker at September 18, 2014 10:10 AM

StackOverflow

Datomic component ids

I want to transact a deeply nested tree structure into Datomic. An example data structure:

{:tree/id (d/tempid :db.part/user),
 :tree/nodes [
   {:node/name "Node1",
    :node/parent "root-node-ref",
    :node/tasks {"task-entities-map"}},
   {:node/name "Node2",
    :node/parent "node1-ref",
    :node/tasks {"task-entities-map"}}]}

Here Node2 is a child of Node1, and Node1 is a child of some root node.

Datomic docs at http://blog.datomic.com/2013/06/component-entities.html indicate that it's not necessary to specify temp ids for nested maps, as they will be created automatically (given that :db/isComponent is set to true for :tree/nodes, :node/tasks etc.).

The question is: how do I specify the parent-child relations here, as given by the :node/parent attributes? I want to avoid having to specify node children, e.g. via a :node/children attribute. And will Datomic automatically assign temp ids to the entities in the :node/tasks lists?

Thanks in advance.

by siphiuel at September 18, 2014 09:41 AM

CompsciOverflow

Data structure for counting bits set on a table

I have a table that contains only bits. I would like to be able to do the following two queries:

  • SET any bit to 0 or 1;
GET the number of bits that are set to 1 from the beginning of the table up to a specific index i.

Is there a data structure that is more efficient than a binary indexed tree, i.e. a Fenwick tree? A binary indexed tree performs both operations in $O(\log n)$ time, where $n$ is the number of elements.

For practical info, my tables are really large, containing up to millions of elements.
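For reference, the baseline in question, sketched in Scala: a Fenwick tree over the bit table, with both operations in $O(\log n)$. A flat-array layout like this is also very cache-friendly, which matters at millions of elements.

// 1-indexed Fenwick (binary indexed) tree specialized to a table of bits.
final class BitTable(n: Int) {
  private val tree = new Array[Int](n + 1)
  private val bits = new Array[Int](n + 1)

  // SET bit i (1..n) to b (0 or 1). O(log n).
  def set(i: Int, b: Int): Unit = {
    val delta = b - bits(i)
    bits(i) = b
    var j = i
    while (j <= n) { tree(j) += delta; j += j & -j }
  }

  // GET the number of 1 bits in positions 1..i. O(log n).
  def countOnes(i: Int): Int = {
    var j = i; var s = 0
    while (j > 0) { s += tree(j); j -= j & -j }
    s
  }
}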

by Issam T. at September 18, 2014 09:31 AM

QuantOverflow

Girsanov Theorem and Quadratic Variation

Girsanov theorem seems to have many different forms. I've got a problem matching the form in wiki to the one in Shreve's book, due to the difficulty of quadratic variation calculation.

Below is the Girsanov Theorem from wiki:

Let $\{W_t\}$ be a Wiener process on the Wiener probability space $\{\Omega,\mathcal{F},P\}$. Let $X_t$ be a measurable process adapted to the natural filtration of the Wiener process $\{\mathcal{F}^W_t\}$.

Given an adapted process $X_t$ with $X_0 = 0$, define $Z_t=\mathcal{E}(X)_t,\,$ where $\mathcal{E}(X)$ is the stochastic exponential (or Doléans exponential) of X with respect to W, i.e. $\mathcal{E}(X)_t=\exp \left ( X_t - \frac{1}{2} [X]_t \right )$, where $[X]_t$ is a quadratic variation for $X_t$. Thus $Z_t$ is a strictly positive local martingale, and a probability measure $Q$ can be defined on $\{\Omega,\mathcal{F}\}$ such that we have Radon–Nikodym derivative $\frac{d Q}{d P} |_{\mathcal{F}_t} = Z_t = \mathcal{E} (X)_t$. Then for each $t$ the measure $Q$ restricted to the unaugmented sigma fields $\mathcal{F}^W_t$ is equivalent to $P$ restricted to $\mathcal{F}^W_t.\,$

Furthermore if $Y$ is a local martingale under $P$ then the process $\tilde Y_t = Y_t - \left[ Y,X \right]_t$ is a $Q$ local martingale on the filtered probability space $\{\Omega,F,Q,\{F^W_t\}\}$.

Below is the Girsanov Theorem from Shreve's book "Stochastic calculus for finance II"

Theorem 5.2.3 (Girsanov, one dimension). Let $W(t)$, $0 \leq t \leq T$, be a Brownian motion on a probability space $(\Omega, \mathscr F, \mathbb P)$, and let $\mathscr F(t)$, $0 \leq t \leq T$, be a filtration for this Brownian motion. Let $\Theta(t)$, $0 \leq t \leq T$, be an adapted process. Define $$Z(t) = \text{exp} \left\{ -\int_0^t \Theta(u)dW(u) - \frac{1}{2} \int_0^t \Theta^2(u) du \right \}, \tag{5.2.11}$$ $$\widetilde W(t) = W(t) + \int_0^t \Theta(u) du, \tag{5.2.12}$$ and assume that $$\mathbb E \int_0^T \Theta^2(u) Z^2(u) du < \infty \tag{5.2.13}$$

Set $Z = Z(T)$. Then $\mathbb E Z = 1$ and under the probability measure $\widetilde P$ given by (5.2.1), the process $\widetilde W(t)$, $0 \leq t \leq T$, is a Brownian motion.

It seems the Girsanov theorem from wiki is more general than the one in Shreve's book.

Now my question is: how can one derive the latter from the former?

It seems one only needs to take $Y(t) = W(t)$ and $X(t) = \int_0^t \Theta(u) du$ in the wiki definition. This leaves one to prove

$$[W(t), \int_0^t \Theta(u) du]_t = - \int_0^t \Theta(u) du$$

, but how to calculate the quadratic variation?

Quadratic variation definition is $$[X,Y](T) := \lim_{\|\Pi\|\to 0} \sum_{j=0}^{n-1} \left[ X(t_{j+1}) - X(t_j) \right] \left[ Y(t_{j+1}) - Y(t_j) \right]$$ , where $\Pi := \{ t_0, t_1, \cdots, t_n \}$ . But I'm a bit stuck here.

Could you please kindly give me some hint how to proceed?

by athos at September 18, 2014 09:28 AM

UnixOverflow

bloated font size rendered in webkit based browsers

Recently, after a complete ports update on FreeBSD-10-p7 x86_64, my BSD system is also affected by a font problem in webkit-based browsers such as vimb and dwb, or even xombrero.
The difference is that while I have a fuzzy rendering problem in webkit under Linux (1, 2), under FreeBSD I have fuzzy fonts and bloated font rendering (see the picture below).

[screenshot: bloated font rendering in a webkit browser]

In any case, what is the solution for bloated, exploding font rendering in webkit?

by r004 at September 18, 2014 09:28 AM

StackOverflow

OAuth2 provider for Scalatra or Play framework in Scala

Is there any OAuth2 provider available for Scala that I can use with Scalatra or Play2 web framework?

I have already seen this answer: OAuth 2.0 provider implementation for Scala/Lift

I am looking for a provider library and not an OAuth2.0 client library.

Edit:

Scala OAuth2.0 Provider was what I was looking for: http://tuxdna.in/blog/2014/07/09/oauth2-dot-0-server-using-play-2-dot-0-framework-in-scala/

by tuxdna at September 18, 2014 09:08 AM

How can I parse a 2D array in a query string?

I have the following jQuery call (which I can't change):

$.ajax({    url: "http://localhost/lookup-coord",
            jsonp: 'cb',
            dataType: 'jsonp',
            data: {'coord' : [[1,1],[1,2],[1,3]]},
            success: function(response) {
                console.log("Got the response: " + response);
            }
      });

My web application receives the following query string:

coord[0][]=1&coord[0][]=1&coord[1][]=1&coord[1][]=2&coord[2][]=1&coord[2][]=3

I want to know how I can parse that query string into an array in Scala Play. Is there an out-of-the-box way to do this?
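A sketch of a manual approach, assuming a Play action where request.queryString gives Map[String, Seq[String]]: match the bracketed keys with a regex, sort by the outer index, and convert.

// Turns keys like "coord[0][]" -> Seq("1","1") into Seq(Seq(1,1), Seq(1,2), ...).
def parseCoords(qs: Map[String, Seq[String]]): Seq[Seq[Int]] = {
  val Key = """coord\[(\d+)\]\[\]""".r
  qs.toSeq
    .collect { case (Key(idx), values) => idx.toInt -> values.map(_.toInt) }
    .sortBy(_._1)
    .map(_._2)
}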

by cm22 at September 18, 2014 09:07 AM

CompsciOverflow

How does interpreting a script work?

Suppose I have a script (.vbs, for example) that is stored in a file. How does the code in the file get converted into machine instructions? What is between the vbs file and the processor?

by developer747 at September 18, 2014 09:06 AM

Are there established complexity classes with real numbers?

A student recently asked me to check an NP-hardness proof for them. They performed a reduction along the lines of:

I reduce this problem $P'$ that is known to be NP-complete to my problem $P$ (with a poly-time many-one reduction), so $P$ is NP-hard.

My answer was basically:

Since $P$ has instances with values from $\mathbb{R}$, it's trivially not Turing-computable so you can skip the reduction.

While formally true, I don't think this approach is insightful: we'd certainly like to be able to capture the "inherent complexity" of a real-valued decision (or optimisation) problem, ignoring the limitations we face in dealing with real numbers; investigating these issues is for another day.

It is, of course, not always as easy as saying, "the discrete version of Subset Sum is NP-complete, so the continuous version is 'NP-hard' as well". In this case, the reduction is easy but there are famous cases of the continuous version being easier, e.g. linear vs. integer programming.

It occurred to me that the RAM model naturally extends to real numbers; let every register store a real number and extend the basic operations accordingly. The uniform cost model still makes sense -- as much as in the discrete case, anyway -- while the logarithmic one does not.

So, my question boils down to: are there established notions of complexity of real-valued problems? How do they relate to the "standard" discrete classes?

Google searches yield some results, e.g. this, but I have no way of telling what is established and/or useful and what is not.

by Raphael at September 18, 2014 09:06 AM

StackOverflow

How can I make Light Table automatically close curly braces and square brackets?

If I type ( I get (), but that doesn't work for { or [. Any idea why?

What should I do to make it work?

BTW, I am using a French Canadian keyboard (Mac OSX).

Thanks!

by leontalbot at September 18, 2014 09:00 AM

Case class to map in Scala

Does anyone know if there is a nice way I can convert a Scala case class instance, e.g.

case class MyClass(param1: String, param2: String)
val x = MyClass("hello", "world")

Into a mapping of some kind, e.g.

getCCParams(x) returns "param1" -> "hello", "param2" -> "world"

Which works for any case class, not just predefined ones. I've found you can pull the case class name out by writing a method that interrogates the underlying Product class, e.g.

def getCCName(caseobj: Product) = caseobj.productPrefix 
getCCName(x) returns "MyClass"

So I'm looking for a similar solution but for the case class fields. I'd imagine a solution might have to use Java reflection, but I'd hate to write something that might break in a future release of Scala if the underlying implementation of case classes changes.

Currently I'm working on a Scala server and defining the protocol and all its messages and exceptions using case classes, as they are such a beautiful, concise construct for this. But I then need to translate them into a Java map to send over the messaging layer for any client implementation to use. My current implementation just defines a translation for each case class separately, but it would be nice to find a generalised solution.
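
For the record, a common reflection-based sketch (it assumes getDeclaredFields returns fields in constructor order, which holds for case classes in practice but is not guaranteed by the JVM spec, which is exactly the fragility the question worries about):

def getCCParams(cc: Product): Map[String, Any] =
  cc.getClass.getDeclaredFields
    .map(_.getName)                  // field names: "param1", "param2", ...
    .zip(cc.productIterator.toList)  // paired with the field values
    .toMap

// getCCParams(MyClass("hello", "world"))
//   == Map("param1" -> "hello", "param2" -> "world")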

by Will at September 18, 2014 08:23 AM

TheoryOverflow

tagging and graph “compression”

I have a question on stack-overflow about "compressing" a graph. Suppose I have tags from a finite set $T$ and objects from a finite set $O$. Moreover there are (uni-directional) links from elements of set of $T$ to elements of set $O$. For example all the links are of the form

$(a,b)$ where $a$ belongs to $T$ and $b$ belongs to $O$. The set $O$ can be huge compared to $T$ and consequently the graph can be huge. But let me give you an example

Suppose $O=\{o_1,o_2,o_3\}$ and $T=\{t_1,t_2,t_3\}$

and I have the full set of allowed links

If I insert in the graph a hidden node $h$ then I can create a graph with links $(o_1,h), (o_2,h), (o_3,h)$ and $(h,t_1),(h,t_2),(h,t_3)$

Then the tagging is preserved if we define that :

"object $o$ has tag $t$ if there is a path from $o$ to $t$".

The definition as you see is invariant with the hidden node.

Moreover while previously I had 9 links, now I have 6 while the number of nodes is increased by 1.

For a fully linked case with $N$ objects and $M$ tags, the number of links drops from $NM$ to $N+M$, so the gain is

$NM/(N+M)$, which tends to $M$ as $N$ increases.

Do you know of research on this? What category of problems does this belong to? I am also very interested in the online case.

Thank you in advance.

by user2987581 at September 18, 2014 08:18 AM

/r/compsci

any useful resources for learning asymptotic notation, solving recurrences, etc?

So I'm in a data structures and algorithm analysis course at school (Penn State - CMPSC 465) and I'm having trouble understanding what seem like they should be simple concepts like theta-notation, how to compute running times given a piece of pseudocode, solving recurrences for running times, etc. I mostly understand the formal definitions, but for the life of me I don't understand how you can just look at a big running time equation and somehow say it's equal to Theta(n log n) or something like that.

Are there any resources that can give me some pointers on understanding these better? I'm planning on talking to my professor during office hours but I figured I'd ask here too because I want to figure this out before we get even farther into the material because it looks like everything is building on these concepts.

submitted by jboby93

September 18, 2014 08:16 AM

StackOverflow

Restricting select fields with Korma

I'm trying to restrict the columns returned from a select query to just one column, but Korma seems to just add the additional column to the default ones instead of using just this one:

=> (dry-run (select games (fields :white_id)))
dry run :: SELECT "games"."stones", "games"."white_id", "games"."black_id", "games"."white_id" FROM "games" :: []

For reference:

=> (dry-run (select games))
dry run :: SELECT "games"."stones", "games"."white_id", "games"."black_id" FROM "games" :: []

What I'd like to see as the output is:

SELECT "games"."white_id" FROM "games";

Using latest Korma 0.4.0

How can I get that?

by Oin at September 18, 2014 08:15 AM

UnixOverflow

Configuring Apache 2.4 for CGI on FreeBSD

I am trying to run CGI on FreeBSD 9.2.

  1. I installed Apache 2.4 (pkg install apache24)
  2. Configured it to load the CGI module.
  3. Also, I did chmod a+x on the files in the cgi-bin directory.

And when I request a test CGI script from the server, it prints this error message:

AH01215: (13)Permission denied: exec of '/usr/local/www/apache24/cgi-bin/test-cgi' failed
End of script output before headers: test-cgi

What's wrong, and how do I fix this problem?
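
Two things worth checking on a stock FreeBSD Apache 2.4 (a sketch of the usual httpd.conf stanzas, not a verified fix; the paths are assumptions): that a CGI module is actually loaded, and that the directory permits execution.

LoadModule cgid_module libexec/apache24/mod_cgid.so

ScriptAlias /cgi-bin/ "/usr/local/www/apache24/cgi-bin/"
<Directory "/usr/local/www/apache24/cgi-bin">
    AllowOverride None
    Options None
    Require all granted
</Directory>

A (13)Permission denied on exec can also mean that some directory on the path is not executable by the www user, that the script's shebang line points at a missing interpreter, or that the filesystem is mounted noexec.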

by Eonil at September 18, 2014 08:15 AM

StackOverflow

Verifying that generic type argument conforms to 2 unrelated types

In Scala one can specify type bound for generic argument.

For example, to ensure that A will conform SomeType1 one can do:

trait Example[A <: SomeType1]

Now, lets say I need to make sure that A conforms to 2 unrelated types SomeType1 and SomeType2.

Is there a way to do this?
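
For reference, Scala expresses this directly with a compound type bound, e.g.:

trait SomeType1
trait SomeType2

// A must be a subtype of both unrelated types at once
trait Example[A <: SomeType1 with SomeType2]

class Both extends SomeType1 with SomeType2
object Check extends Example[Both]  // compiles; a type with only one of the two would not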

by Eugeny Loy at September 18, 2014 07:42 AM

QuantOverflow

How to work out weights for a portfolio based on an inverse ratio with positive and negative values?

I am trying to work out how to determine weights for the assets in order to form a portfolio. The ratio I am using is EV/EBIT, hence the smaller the better. The problem is I don't know how to handle it when EV < 0. Obviously that is kind of a 'free lunch' mathematically speaking and I realise the discontinuity at x/0 is what messes things up in a way. Would anyone be able to suggest something?

Thanks!

by ArturoP at September 18, 2014 07:36 AM

StackOverflow

spray.can.Http$ConnectionException: Premature connection close

In the test below, I try to simulate a timeout and then send a normal request. However, I get spray.can.Http$ConnectionException: Premature connection close (the server doesn't appear to support request pipelining).

class SprayCanTest extends ModuleTestKit("/SprayCanTest.conf") with FlatSpecLike with Matchers {

  import system.dispatcher

  var app = Actor.noSender

  protected override def beforeAll(): Unit = {
    super.beforeAll()
    app = system.actorOf(Props(new MockServer))
  }

  override protected def afterAll(): Unit = {
    system.stop(app)
    super.afterAll()
  }


  "response time out" should "work" in {
    val setup = Http.HostConnectorSetup("localhost", 9101, false)

    connect(setup).onComplete {
      case Success(conn) => {
        conn ! HttpRequest(HttpMethods.GET, "/timeout")
      }
    }

    expectMsgPF() {
      case Status.Failure(t) =>
        t shouldBe a[RequestTimeoutException]
    }


  }

  "normal http response" should "work" in {

    //Thread.sleep(5000)
    val setup = Http.HostConnectorSetup("localhost", 9101, false)

    connect(setup).onComplete {
      case Success(conn) => {
        conn ! HttpRequest(HttpMethods.GET, "/hello")
      }
    }

    expectMsgPF() {
      case HttpResponse(status, entity, _, _) =>
        status should be(StatusCodes.OK)
        entity should be(HttpEntity("Helloworld"))
    }
  }

  def connect(setup: HostConnectorSetup)(implicit system: ActorSystem) = {
    // for the actor 'asks'
    import system.dispatcher
    implicit val timeout: Timeout = Timeout(1 second)
    (IO(Http) ? setup) map {
      case Http.HostConnectorInfo(connector, _) => connector
    }
  }

  class MockServer extends Actor {
    //implicit val timeout: Timeout = 1.second
    implicit val system = context.system

    // Register connection service
    IO(Http) ! Http.Bind(self, interface = "localhost", port = 9101)

    def receive: Actor.Receive = {
      case _: Http.Connected => sender ! Http.Register(self)

      case HttpRequest(GET, Uri.Path("/timeout"), _, _, _) => {
        Thread.sleep(3000)
        sender ! HttpResponse(entity = HttpEntity("ok"))
      }

      case HttpRequest(GET, Uri.Path("/hello"), _, _, _) => {
        sender ! HttpResponse(entity = HttpEntity("Helloworld"))
      }
    }
  }


}

and My config for test:

spray {
  can {
    client {
      response-chunk-aggregation-limit = 0
      connecting-timeout = 1s
      request-timeout = 1s
    }
    host-connector {
      max-retries = 0
    }
  }
}

I found that in both cases the "conn" object is the same. So I guess that when the RequestTimeoutException happens, spray puts the connection back into the pool (of default size 4?), and the next case uses the same connection; but at that point the connection is still alive, so the server treats the next request as pipelined/chunked.

If I put some sleep before the second case, it passes. So I guess I must close the connection when I get the RequestTimeoutException and make sure the second case uses a fresh connection, right?

What should I do? Any configuration for this?
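
One thing that may help between the two cases (a sketch based on my reading of spray's connection-level API, not something verified against this exact version): explicitly ask the Http manager to close all connectors after the timeout test, so the second request cannot reuse the half-dead pooled connection.

import akka.actor.ActorSystem
import akka.io.IO
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._
import spray.can.Http

def closeAllConnections()(implicit system: ActorSystem): Unit = {
  implicit val timeout: Timeout = Timeout(3.seconds)
  // Http.CloseAll tears down every host connector and open connection,
  // so the next test has to establish a fresh one.
  Await.ready(IO(Http) ? Http.CloseAll, timeout.duration)
}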

Thanks

Leon

by anuni at September 18, 2014 07:30 AM

QuantOverflow

Scale of Market Quakes Computation

I would like to reproduce the results in the paper "The scale of market quakes" by T. Bisig et al., but I am getting stuck at the computation of the Fourier coefficients in equation (4). They are defined as the magnitude of the Fourier frequency computed from the discretized omega(t). I know how to compute a Discrete Fourier Transform, but which omega(t) should I use? It's also not clear why you need a DFT at all. Is it to filter the signal?

by LouisChiffre at September 18, 2014 07:28 AM

CompsciOverflow

Looking for an algorithm to solve a specific Vehicle Routing Problem

I am trying to figure out a way to create routes for trucks to complete a list of orders(drops/stops), while minimizing distance traveled.

  • There is only ever 1 company warehouse in the area.
  • The trucks have to deliver based on capacity.
  • Each truck can hold a maximum (usually 18 pallets).
  • Each order will be for a number underneath that maximum.
  • There will be a maximum number of trucks specified.
  • When trucks are finished with their route, they will return to the company warehouse.

I already have all of the orders, the pallets they are requesting, and the distance between each point.

I am an absolute simpleton when it comes to complex problems like these... I am hoping that someone has a (relatively) simple solution, or an article of some sort that could help me down my path.

by Primalpat at September 18, 2014 07:19 AM

StackOverflow

Insertion and Retrieval formats for MySql TIME type variable when passed as Json Objects

We are using Play 2.3.x for our web application. We use REST API services to perform the CRUD operations on data objects.

The objects are passed in the form of JSON objects. In my Scala class, one of the fields is defined with the LocalTime (org.joda.time.LocalTime) data type, as below:

class Monitors(tag:Tag)
 extends Table[Monitor](tag,"MONITOR"){
 ..
 ..
 def sunSchedule = column[Option[LocalTime]]("sunSchedule" , O.DBType("TIME"))
 ..
 }

The database used is MySQL, and the corresponding data type for the above field is TIME. I could successfully insert the object using JSON as below:

{
  ..
  ..
  "sunSchedule" : "00:00:00",
  ..  
}

When I retrieve the stored object, the output is in the below format:

{
  ..
  ..
  sunSchedule: "Some(00:00:00.000)",
  ..
}

I want the output to be printed in the same format as the input, i.e. "00:00:00" rather than "Some(00:00:00.000)".

Can someone help me out with this?
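
The Some(...) is the Option wrapper leaking into the output via toString. A sketch of one way out with Play JSON (assuming the response is built with Json.toJson rather than string interpolation): give LocalTime its own Writes, so an Option[LocalTime] field serializes through Play's regular Option handling instead.

import org.joda.time.LocalTime
import play.api.libs.json._

// Render a LocalTime as "HH:mm:ss"; Some(00:00:00.000) then comes out
// as the bare string "00:00:00".
implicit val localTimeWrites: Writes[LocalTime] =
  Writes(t => JsString(t.toString("HH:mm:ss")))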

by bhavya at September 18, 2014 07:10 AM

TheoryOverflow

Bandwidth and vocal shaping algrothim related?

Is the class of algorithms for detecting bandwidth shaping applicable to the detection of vocal shaping (autotune)?

by caseyr547 at September 18, 2014 07:01 AM

Graph isomorphism with equivalence relation on the vertex set

A colored graph can be described as tuple $(G,c)$ where $G$ is a graph and $c : V(G) \rightarrow \mathbb{N}$ is the coloring. Two colored graphs $(G,c)$ and $(H,d)$ are said to be isomorphic if there exists an isomorphism $\pi : V(G) \rightarrow V(H)$ such that the coloring is obeyed, i.e. $c(v) = d(\pi(v))$ for all $v \in V(G)$.

This notion captures the isomorphism of colored graphs in a very strict sense. Consider the case where you have two political maps of the same region but they use different color sets. If one asks if they are colored in the same fashion one would assume this to mean whether there exists a bijective mapping between the two color sets such that the colors of both maps coincide via this mapping. This notion can be formalized by describing colored graphs as tuple $(G,\sim)$ where $\sim$ is an equivalence relation on the vertex set of $G$. We can then say two such graphs $(G,\sim_1)$ and $(H,\sim_2)$ are isomorphic if there exists an isomorphism $\pi : V(G) \rightarrow V(H)$ such that for all pairs $v_1,v_2 \in V(G)$ it holds that $$v_1 \sim_1 v_2 \text{ iff } \pi(v_1) \sim_2 \pi(v_2)$$

My question is whether this concept has been studied previously w.r.t. finding canonical forms etc. and if so under what name it is known?

by user17410 at September 18, 2014 06:54 AM

QuantOverflow

Predict Futures Prices based on weather + agricultural data

I’m working in the area of Data Mining and have come up with the following idea for my Masters project. The text may not be the best structured, but it’s a working draft to give you a quick idea.

Basic Hypothesis,

  • Can a combination of (Weather Data + Agricultural Data + Social Media (twitter, etc) data + other relevant data) be used to aid an investor to buy futures of product / commodity ‘X’
  • I plan to focus on testing the hypothesis on ‘Soy Bean Futures’. The core idea is to test the approach; even if I fail, that’s fine. My method / approach must be correct.

Target,

  • potential target audience could be investor, govt agencies or agricultural industries

Method,

  • Focus on Soy Bean Futures in the USA (the world’s largest soy producer) to narrow down my problem scope
  • more specifically on the State of Illinois (which leads soy production in the US) to zoom in even further

Technique,

  • Understand how the model for pricing of Futures works
  • Find historical trading data on Soy Futures in Illinois [from Quandl?]. I still don’t know how I will match Soy Futures trading data to where the beans were produced, so that’s an issue, I think
  • Weather [temp, humidity, sea pressure, etc.] & agricultural [yields, farm sizes, etc.] data is easy to get & analyze
  • Do some number crunching / data mining to test “IF weather in Illinois affects soy yields / production, which in turn affects Soy Futures prices”; I still need to refine the technique, but that’s the rough idea

Your Input,

  • what do you think about the whole idea? Totally nuts? Not realistic? Do I need to be a math god to figure this out?
  • if you think this is even remotely feasible, what are my must-dos and must-NOT-dos?
  • is my technique fully flawed? What am I missing? What am I under-estimating? How can I improve my technique?

by user3491422 at September 18, 2014 06:07 AM

StackOverflow

What is the purpose of *> and <* in Scalaz

Let's take a look at the implementation of finish on a Scalaz Task

def onFinish(f: Option[Throwable] => Task[Unit]): Task[A] =
    new Task(get flatMap {
        case -\/(e) => f(Some(e)).get *> Future.now(-\/(e))
        case r => f(None).get *> Future.now(r)
    })

What is the *> accomplishing here?
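
*> comes from scalaz's Apply syntax: fa *> fb sequences both "effects" but keeps only the right-hand value (<* keeps the left). So in onFinish it runs the cleanup f(...) and then yields the original outcome. A small sketch with Option as the applicative:

import scalaz._, Scalaz._

Option(1) *> Option("two")            // Some("two"): left runs, its value is discarded
Option(1) <* Option("two")            // Some(1): right runs, its value is discarded
(None: Option[Int]) *> Option("two")  // None: a failed left short-circuits the pair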

by UndercoverAgent at September 18, 2014 06:05 AM

QuantOverflow

What data should be used for regression-based model backtesting?

I ran regressions using historical valuation data and now want to backtest the models I came up with.

Are there any issues with using the same historical data set for the backtest that I need to be aware of?

by Andrei at September 18, 2014 05:48 AM

Two assets with the same mean and standard deviation - Would there be any benefit? [on hold]

If I have two assets in a portfolio with the same standard deviation and mean and the correlation between the assets is 0, theoretically could there be a situation where it would be beneficial to having a portfolio like this?
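
A quick check with the standard two-asset variance formula: with weights $w$ and $1-w$, common volatility $\sigma$ and zero correlation, the portfolio variance is $\sigma_p^2 = w^2\sigma^2 + (1-w)^2\sigma^2$, minimized at $w = 1/2$ where $\sigma_p = \sigma/\sqrt{2}$. The mean is unchanged, so equal-weighting cuts risk by a factor of $\sqrt{2}$ at no cost in expected return, which suggests the answer is yes.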

by user12091 at September 18, 2014 05:20 AM

StackOverflow

How to serialize a Map[CustomType, String] to JSON

Given the following Enumeration...

object MyEnum extends Enumeration {

  type MyEnum = Value

  val Val1 = Value("val1")
  val Val2 = Value("val2")
  val Val3 = Value("val3")
} 

import MyEnum._

... and the following Map...

val m = Map(
  val1 -> "one",
  val2 -> "two",
  val3 -> "three"
)

... I need to transform m to JSON:

import play.api.libs.json._

val js = Json.toJson(m)

The last statement doesn't compile because the compiler doesn't find a Json serializer for type scala.collection.immutable.Map[MyEnum.Value,String].

Question: Since Play does provide a serializer for type scala.collection.immutable.Map[String,String], and my enumeration actually contains strings, is there a way to reuse the default JSON serializer?
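
One low-tech way to do exactly that reuse, as a sketch: stringify the keys first, since each enumeration value's toString is already the string it was created with.

val js = Json.toJson(m.map { case (k, v) => k.toString -> v })
// yields {"val1":"one","val2":"two","val3":"three"} (field order may vary)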

by j3d at September 18, 2014 05:14 AM

/r/compsci

Is software becoming too bloated, and will the potential end of Moore's law force us to again look at software efficiency with more urgency?

As I sit here waiting longer than ever for the latest version of the iOS simulator (version 8.0) to load, I'm forced to think about whether the trend towards bloated operating systems and software frameworks will eventually catch up with us.

With each new release of Microsoft and Apple operating systems, more and more bloat-ware is "required" and more tasks of various sorts become hard-wired into the OS. Some of these systems are valuable (such as garbage collection ... which arguably could also be part of the problem despite its benefits), but more often we are starting to see support for unused hardware features, monitors, cloud synchronization and other niceties which, while "cool", increase the overhead of even the simplest of applications.

Additionally, in web development we are seeing new languages and frameworks created at a breakneck pace. It seems like every few weeks there is a new JavaScript framework (jQuery Mobile, angular.js, etc.) or programming language (C#, Java, Ruby, Scala, the Wolfram Language) that, while useful in a wide array of scenarios, has introduced overhead in load times and dependencies, and the inclusion of all sorts of potentially useful functionality of which we often need only a single piece.

So, some questions for discussion:

Will this mindset of building up massive piles of "core" functionalities eventually come crashing down when we one day reach the atomic limit of how fast we can make our machines? Or ..

Will distributed computing and quantum algorithms make the infinite computing power we expect from Moore's law continue on forever?

Will we reach a point soon where developers will be forced by slowing of cpu power/cost to focus more on efficiency of code, like in the early days of programming and reduce dependency on frameworks? Or ...

Is the rapid development of new feature rich languages, frameworks and bulky operating systems ultimately leading to MORE efficient software and usage of hardware resources?

submitted by NyPoster

September 18, 2014 04:58 AM

StackOverflow

How to flatten a tuple, as flatMap is not defined over tuples?

For an example, how to convert ((1,"one"),2) to (1,"one",2)?

I tried to use flatMap, but flatMap is not defined on tuples:

scala> val a= ((1,"one"),2)
a: ((Int, String), Int) = ((1,one),2)
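
For a fixed shape like this one, a plain pattern match does the flattening (a sketch; fully generic tuple flattening would need something like shapeless):

scala> a match { case ((x, y), z) => (x, y, z) }
res1: (Int, String, Int) = (1,one,2)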

by Daniel Wu at September 18, 2014 04:48 AM

How to do setup/teardown in specs2 when using "in new WithApplication"

I am using Specs2 with play 2.2.1 built with Scala 2.10.2 (running Java 1.7.0_51). I have been reading about how to do setup/teardown with Specs2. I have seen examples using the "After" trait as follows:

class Specs2Play extends org.specs2.mutable.Specification {
  "this is the first example" in new SetupAndTeardownPasswordAccount {
    println("testing")
  }
}

trait SetupAndTeardownPasswordAccount extends org.specs2.mutable.After {
  println("setup")

  def after  = println("teardown ")
}

This works fine, except that all of my tests use "in new WithApplication". It seems what I need is an object which is both a "WithApplication" and an "After". The following does not compile, but is essentially what I want:

trait SetupAndTeardownPasswordAccount extends org.specs2.mutable.After with WithApplication

So, my question is: how do I add setup/teardown to my tests which are already using "in new WithApplication"? My primary concern is that all of our tests make use of fake routing like this (so they need the WithApplication):

val aFakeRequest = FakeRequest(method, url).withHeaders(headers).withBody(jsonBody)
val Some(result) = play.api.test.Helpers.route(aFakeRequest)
result
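
One direction that may work (a sketch, relying on the assumption that Play 2.2's WithApplication extends specs2's Around, which it does in that version): subclass WithApplication and override around, so setup and teardown wrap the example body inside the running FakeApplication.

import org.specs2.execute.{AsResult, Result}
import play.api.test.WithApplication

abstract class WithSetupTeardown extends WithApplication {
  def setup(): Unit = println("setup")
  def teardown(): Unit = println("teardown")

  // super.around starts the FakeApplication; our block runs inside it.
  override def around[T: AsResult](t: => T): Result =
    super.around {
      setup()
      try AsResult(t) finally teardown()
    }
}

Tests would then read "this is the first example" in new WithSetupTeardown { ... }.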

by user1483903 at September 18, 2014 04:11 AM

Wes Felter

"The promise of these platforms is that you can create new assets and applications on top of them...."

“The promise of these platforms is that you can create new assets and applications on top of them. But the question that’s almost never asked is, are any of these assets or applications worth owning or using?”

- Jimmy Song

September 18, 2014 03:23 AM

"Why would it be awesome? Do you even know what the Internet of Things is? Or do you just assume that..."

“Why would it be awesome? Do you even know what the Internet of Things is? Or do you just assume that more people using the blockchain is good no matter what they are doing with it?”

- PrivateBooty

September 18, 2014 03:20 AM

TheoryOverflow

Is P equal to the intersection of all superpolynomial time classes?

Let us call a function $f(n)$ superpolynomial if $\lim_{n\rightarrow\infty} n^c/f(n)=0$ holds for every $c>0$.

It is clear that for any language $L\in {\mathsf P}$ it holds that $L\in {\mathsf {DTIME}}(f(n))$ for every superpolynomial time bound $f(n)$. I wonder whether the converse of this statement is also true. That is, if we know $L\in {\mathsf {DTIME}}(f(n))$ for every superpolynomial time bound $f(n)$, does it imply $L\in {\mathsf P}$? In other words, is it true that $${\mathsf P} = \cap_f {\mathsf {DTIME}}(f(n))$$ where the intersection is taken over every superpolynomial $f(n)$?

by Andras Farago at September 18, 2014 03:10 AM

CompsciOverflow

Average hard disk transfer time [on hold]

So my homework has this question and I'm not quite sure how to do it; anyone want to help me out? It's due tonight! You have a hard disk that has 512 bytes/sector, 1024 sectors/track, 20 tracks, and a rotation rate of 7200 RPM. The track-to-track seek time is 2 msec. Compute the average transfer time for reading 512 bytes of data from the disk.
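
Not a full answer, but the standard decomposition (with the caveat that textbooks differ on what "average" includes): at 7200 RPM one revolution takes $60000/7200 \approx 8.33$ ms, so the average rotational latency is half a revolution, $\approx 4.17$ ms; reading one 512-byte sector is $1/1024$ of a revolution, $8.33/1024 \approx 0.008$ ms; the seek term is then built from the 2 msec track-to-track figure using whatever convention the course adopts (e.g. an average seek distance of roughly a third of the 20 tracks).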

by user21865 at September 18, 2014 03:06 AM

Finding a subset in bipartite graph violating Hall's condition

We are given a bipartite graph with $n \leq 200$ vertices in each of the first and second partite sets. Let $U$ be some set of vertices in the first set, and $V$ the set of vertices from the second that are connected to some vertex in $U$. If for every choice of $U$ we have $|U| \leq |V|$ (Hall's condition), then there exists a perfect matching (Hall's theorem).

But in our graph we know there is no such matching. That means there exists some set of vertices $U$ violating Hall's condition, and I'd like to find it with complexity around $O(n^3)$ - hints instead of full solutions are most welcome.

What I already tried was finding the maximum matching first, and then searching for our subset, but I couldn't figure out how to do this. Also, I thought of ways of reducing this problem to some max-flow (as you do with the maximum matching) but it also seemed to me like a dead end.

by Cris at September 18, 2014 02:51 AM

StackOverflow

Using a custom enum in the Scala Worksheet I am receiving an error: java.lang.ExceptionInInitializerError

I am working with a fresh install of the Typesafe IDE (Eclipse with Scala IDE pre-installed), on Windows 7 64-bit, and I have had mixed success with the Scala Worksheet. It has already hard-crashed my machine (to a full reset, or once to the blue screen of death) three times in less than an hour, so this may be a bug in the Scala Worksheet; I'm not sure yet and don't have time to chase down that issue. However, this enum issue is stopping me from testing.

I am using the following code in the Scala Worksheet:

package test

import com.stack_overflow.Enum

object WsTempA {
  object Value extends Enum {
    sealed abstract class Val extends EnumVal
    case object Empty   extends Val; Empty()
    case object Player1 extends Val; Player1()
    case object Player2 extends Val; Player2()
  }

  println(Value.values)
  println(Value.Empty)
}

The above works fine. However, if you comment out the first println, the second line throws an exception: java.lang.ExceptionInInitializerError. And I am just enough of a Scala newbie to not understand why it's occurring. Any help would be deeply appreciated.

Here's the stack trace from the right side of the Scala Worksheet (left side stripped to display nicely here):

java.lang.ExceptionInInitializerError
    at test.WsTempA$Value$Val.<init>(test.WsTempA.scala:7)
    at test.WsTempA$Value$Empty$.<init>(test.WsTempA.scala:8)
    at test.WsTempA$Value$Empty$.<clinit>(test.WsTempA.scala)
    at test.WsTempA$$anonfun$main$1.apply$mcV$sp(test.WsTempA.scala:14)
    at org.scalaide.worksheet.runtime.library.WorksheetSupport$$anonfun$$execute$1.apply$mcV$sp(WorksheetSupport.scala:76)
    at org.scalaide.worksheet.runtime.library.WorksheetSupport$.redirected(WorksheetSupport.scala:65)
    at org.scalaide.worksheet.runtime.library.WorksheetSupport$.$execute(WorksheetSupport.scala:75)
    at test.WsTempA$.main(test.WsTempA.scala:11)
    at test.WsTempA.main(test.WsTempA.scala)
 Caused by: java.lang.NullPointerException
    at test.WsTempA$Value$.<init>(test.WsTempA.scala:8)
    at test.WsTempA$Value$.<clinit>(test.WsTempA.scala)
    ... 9 more

The class com.stack_overflow.Enum comes from this StackOverflow thread. I have pasted in my version here for simplicity (in case I missed something critical during the copy/paste operation):

package com.stack_overflow

//Copied from http://stackoverflow.com/a/8620085/501113
abstract class Enum {

  type Val <: EnumVal

  protected var nextId: Int = 0

  private var values_       =       List[Val]()
  private var valuesById_   = Map[Int   ,Val]()
  private var valuesByName_ = Map[String,Val]()

  def values       = values_
  def valuesById   = valuesById_
  def valuesByName = valuesByName_

  def apply( id  : Int    ) = valuesById  .get(id  )  // Some|None
  def apply( name: String ) = valuesByName.get(name)  // Some|None

  // Base class for enum values; it registers the value with the Enum.
  protected abstract class EnumVal extends Ordered[Val] {
    val theVal = this.asInstanceOf[Val]  // only extend EnumVal to Val
    val id = nextId
    def bumpId { nextId += 1 }
    def compare( that:Val ) = this.id - that.id
    def apply() {
      if ( valuesById_.get(id) != None )
        throw new Exception( "cannot init " + this + " enum value twice" )
      bumpId
      values_ ++= List(theVal)
      valuesById_   += ( id       -> theVal )
      valuesByName_ += ( toString -> theVal )
    }
  }
}

Any sort of guidance would be greatly appreciated.


UPDATE - 2014/Sep/17

It turns out that even the solution in the prior update (from 2013/Feb/19) fails to work if one places println(Value.Player2) as the first command; i.e. the ordinals are assigned incorrectly.

I have since created a verifiable working solution as a Gist. The implementation waits to assign the ordinals until after all JVM class/object initialization completes. It also facilitates extending/decorating each enumeration member with additional data.

UPDATE - 2013/Feb/19

After several cycles with Rex Kerr, here is the updated versions of the code that now works:

package test

import com.stack_overflow.Enum

object WsTempA {
  object Value extends Enum {
    sealed abstract class Val extends EnumVal
    case object Empty   extends Val {Empty.init}   // <---changed from ...Val; Empty()
    case object Player1 extends Val {Player1.init} // <---changed from ...Val; Player1()
    case object Player2 extends Val {Player2.init} // <---changed from ...Val; Player2()
    private val init: List[Value.Val] = List(Empty, Player1, Player2) // <---added
  }

  println(Value.values)
  println(Value.Empty)
  println(Value.values)
  println(Value.Player1)
  println(Value.values)
  println(Value.Player2)
  println(Value.values)
}

package com.stack_overflow

//Copied from http://stackoverflow.com/a/8620085/501113
abstract class Enum {

  type Val <: EnumVal

  protected var nextId: Int = 0

  private var values_       =       List[Val]()
  private var valuesById_   = Map[Int   ,Val]()
  private var valuesByName_ = Map[String,Val]()

  def values       = values_
  def valuesById   = valuesById_
  def valuesByName = valuesByName_

  def apply( id  : Int    ) = valuesById  .get(id  )  // Some|None
  def apply( name: String ) = valuesByName.get(name)  // Some|None

  // Base class for enum values; it registers the value with the Enum.
  protected abstract class EnumVal extends Ordered[Val] {
    val theVal = this.asInstanceOf[Val]  // only extend EnumVal to Val
    val id = nextId
    def bumpId { nextId += 1 }
    def compare(that: Val ) = this.id - that.id
    def init() {   // <--------------------------changed name from apply
      if ( valuesById_.get(id) != None )
        throw new Exception( "cannot init " + this + " enum value twice" )
      bumpId
      values_ ++= List(theVal)
      valuesById_   += ( id       -> theVal )
      valuesByName_ += ( toString -> theVal )
    }
  }
}

by chaotic3quilibrium at September 18, 2014 02:35 AM

"What part of Milner-Hindley do you not understand?"

I can't find it now, but I swear there used to be a T-shirt for sale featuring the immortal words:


What part of

Milner-Hindley

do you not understand?


In my case, the answer would be... all of it!

In particular, I often see notation like this in Haskell papers, but I have no clue what the hell any of it means. I have no idea what branch of mathematics it's supposed to be.

I recognise the letters of the Greek alphabet of course, and symbols such as "∉" (which usually means that something is not an element of a set).

On the other hand, I've never seen "⊢" before (Wikipedia claims it might mean "partition"). I'm also unfamiliar with the use of the vinculum here. (Usually it denotes a fraction, but that does not appear to be the case here.)

I imagine SO is not a good place to be explaining the entire Milner-Hindley algorithm. But if somebody could at least tell me where to start looking to comprehend what this sea of symbols means, that would be helpful. (I'm sure I can't be the only person who's wondering...)
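
For what it's worth, the standard reading (not tied to any one paper): $\vdash$ is the turnstile, read "entails", so $\Gamma \vdash e : \tau$ says "in context $\Gamma$, expression $e$ has type $\tau$"; the vinculum is an inference rule, premises above and conclusion below. The application rule, for example:

$$\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2 \qquad \Gamma \vdash e_2 : \tau_1}{\Gamma \vdash e_1 \; e_2 : \tau_2}$$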

by MathematicalOrchid at September 18, 2014 02:25 AM

TheoryOverflow

random number generation on genetic algorithms

I'm implementing a genetic algorithm in the Java programming language. As you know, there are some random events in the algorithm, like roulette selection, crossover, mutation, etc. In order to get a better probability distribution across these events, which approach is better: using a single shared Random object, or creating a separate Random object for each event?

by rodolfo.mendes at September 18, 2014 02:23 AM

StackOverflow

JDT weaving is currently disabled

I just installed Eclipse standard 4.4 Luna, and after installing the Scala IDE and friends I get

JDT Weaving is currently disabled. The Scala IDE needs JDT Weaving to be active,
or it will not work correctly. 

Activate JDT Weaving and Restart Eclipse? (Highly Recommended)

[OK] [Cancel]

Does anyone know how to do this?

Now my comments on this error message

  • In general error messages that tell you what to do, but not how to do it are frustrating.
  • The [OK] button implies that the dialog will enable it for me, but it does exactly the same as clicking the [Cancel] button. Consequently, the UI design is defective.
  • The preferences dialog in Luna does not show anything under JDT or Weaving.
  • The help search in Luna for "JDT Weaving" returns too much information to offer any simple solution.
  • My search via Google turns up interesting discussion on the problem, but fails to simply state the solution, or if there is one.

https://groups.google.com/forum/#!msg/scala-ide-user/7GdTuQHyP4Q/aiUt70lnzigJ

by Eric Kolotyluk at September 18, 2014 01:54 AM

what is the difference in Java 8 Stream.parallel and Scala trait Parallelizable.par

Currently I only know that both Java 8 and Scala support parallel operations on collections. I wrote several examples to play with them, but I am not sure what the difference is in terms of performance, implementation technique, and so on.

When I searched, I didn't find sufficient material about it.

Can someone share some experience with this?

by Cloud tech at September 18, 2014 01:49 AM

/r/emacs

2008 blog post by Steve Yegge - mostly about XEmacs - makes some interesting points about Emacs vs IDEs in the sections "The dubious future of Emacs" and "The bad news: the competition isn't the IDEs".

September 18, 2014 01:49 AM

arXiv Networking and Internet Architecture

TCP Performance for Kurd Messenger Application Using Bio-computing. (arXiv:1409.5054v1 [cs.NI])

This work was conducted to design, implement, and evaluate a new model of measuring Transmission Control Protocol (TCP) performance of real time network. The proposed model Biological Kurd Messenger (BIOKM) has two main goals: First is to run the model efficiently, second is to obtain high TCP performance via real time network using bio-computing technique, especially molecular calculation because it provides wisdom results and it can exploit all facilities of phylogentic analysis. To measure TCP performance two protocols were selected Internet Relay Chat Daemon (IRCD) and File Transfer Protocol (FTP), the BIOKM model consists of two applications Kurd Messenger Server Side (KMSS) and Kurd Messenger Client Side (KMCS) written in Java programming language by implementing algorithms of BIOKM Server and Client application. The paper also includes the implementation of hybridized model algorithm based on Neighbor-Joining (NJ) method to measure TCP performance, then implementing algorithm of Little law (steady state) for single server queue as a comparison with bio-computing algorithm. The results obtained by using bio-computing and little law techniques show very good performance and the two techniques result are very close to each other this is because of local implementation. The main tools which have been used in this work can be divided into software and hardware tools.

Keywords: Biological Kurd Messenger (BIOKM), Kurd Messenger Phylogenetic tree, Hybridized Model, Little Law, TCP Performance.

by Bnar Faisal Daham, Ayad Ghany Ismaeel, Suha A. Abdual-Rahman at September 18, 2014 01:30 AM

Discrete Transfinite. (arXiv:1409.5052v1 [math.LO])

We describe various computational models based initially, but not exclusively, on that of the Turing machine, that are generalized to allow for transfinitely many computational steps. Variants of such machines are considered that have longer tapes than the standard model, or that work on ordinals rather than numbers. We outline the connections between such models and the older theories of recursion in higher types, generalized recursion theory, and recursion on ordinals such as $\alpha$-recursion. We conclude that, in particular, polynomial time computation on $\omega$-strings is well modelled by several convergent conceptions.

by Philip Welch at September 18, 2014 01:30 AM

Decidability Problems for Actor Systems. (arXiv:1409.5022v1 [cs.PL])

We introduce a nominal actor-based language and study its expressive power. We have identified the presence/absence of fields as a crucial feature: the dynamic creation of names in combination with fields gives rise to Turing completeness. On the other hand, restricting to stateless actors gives rise to systems for which properties such as termination are decidable. This decidability result still holds for actors with states when the number of actors is bounded and the state is read-only.

by F.S. de Boer, M.M. Jaghoori, C. Laneve, G. Zavattaro at September 18, 2014 01:30 AM

CryptGraph: Privacy Preserving Graph Analytics on Encrypted Graph. (arXiv:1409.5021v1 [cs.CR])

Many graph mining and analysis services have been deployed on the cloud, which can alleviate users from the burden of implementing and maintaining graph algorithms. However, putting graph analytics on the cloud can invade users' privacy. To solve this problem, we propose CryptGraph, which runs graph analytics on encrypted graph to preserve the privacy of both users' graph data and the analytic results. In CryptGraph, users encrypt their graphs before uploading them to the cloud. Cloud runs graph analysis on the encrypted graphs and obtains results which are also in encrypted form that the cloud cannot decipher. The encrypted results are sent back to users and users do the decryption to get the plaintext results. In this process, users' graphs and the analytics results are both encrypted and the cloud knows neither of them. Thereby, users' privacy can be strongly protected. Meanwhile, with the help of homomorphic encryption, the results analyzed from the encrypted graphs are guaranteed to be correct. In this paper, we present how to encrypt a graph using homomorphic encryption and how to query the structure of an encrypted graph by computing polynomials. To solve the problem that certain operations are not executable on encrypted graph, we propose hard computation outsourcing to seek help from users. Using two graph algorithms as examples, we show how to apply our methods to perform analytics on encrypted graphs. Experiments on two datasets demonstrate the correctness and feasibility of our methods.

by Pengtao Xie, Eric Xing at September 18, 2014 01:30 AM

An Agent-Based Algorithm exploiting Multiple Local Dissimilarities for Clusters Mining and Knowledge Discovery. (arXiv:1409.4988v1 [cs.LG])

We propose a multi-agent algorithm able to automatically discover relevant regularities in a given dataset, determining at the same time the set of configurations of the adopted parametric dissimilarity measure yielding compact and separated clusters. Each agent operates independently by performing a Markovian random walk on a suitable weighted graph representation of the input dataset. Such a weighted graph representation is induced by the specific parameter configuration of the dissimilarity measure adopted by the agent, which searches and takes decisions autonomously for one cluster at a time. Results show that the algorithm is able to discover parameter configurations that yield a consistent and interpretable collection of clusters. Moreover, we demonstrate that our algorithm shows comparable performances with other similar state-of-the-art algorithms when facing specific clustering problems.

by Filippo Maria Bianchi, Enrico Maiorino, Lorenzo Livi, Antonello Rizzi, Alireza Sadeghian at September 18, 2014 01:30 AM

Second-Order SAT Solving using Program Synthesis. (arXiv:1409.4925v1 [cs.LO])

Program synthesis is the automated construction of software from a specification. While program synthesis is undecidable in general, we show that synthesising finite-state programs is NEXPTIME-complete. We then present a fully automatic, sound and complete algorithm for synthesising C programs from a specification written in C. Our approach uses a combination of bounded model checking, explicit state model checking and genetic programming to achieve surprisingly good performance for a problem with such high complexity.

By identifying a correspondence between program synthesis and second-order logic, we show how to use our program synthesiser as a decision procedure for existential second-order logic over finite domains. We illustrate the expressiveness of this logic by encoding several program analysis problems including superoptimisation, de-obfusaction, safety and termination. Finally, we present experimental results showing that our approach is tractable in practice.

by Daniel Kroening, Matt Lewis at September 18, 2014 01:30 AM

Can Market Risk Perception Drive Inefficient Prices? Theory and Evidence. (arXiv:1409.4890v1 [q-fin.GN])

This work presents an asset pricing model that under rational expectation equilibrium perspective shows how, depending on risk aversion and noise volatility, a risky-asset has one equilibrium price that differs in term of efficiency: an informational efficient one (similar to Campbell and Kyle (1993)), and another one where price diverges from its informational efficient level. The former Pareto dominates (is dominated by) the latter in presence of low (high) market risk perception. The estimates of the model using S&P 500 Index support the theoretical findings, and the estimated inefficient equilibrium price captures the higher risk premium and higher volatility observed during the Dot.com bubble 1995--2000.

by Matteo Formenti at September 18, 2014 01:30 AM

Cryptanalyzing an image encryption algorithm based on scrambling and Vigenère cipher. (arXiv:1409.4845v1 [cs.CR])

Recently, an image encryption algorithm based on scrambling and Vigenère cipher has been proposed. However, it was soon cryptanalyzed by Zhang et al. using a combination of chosen-plaintext attack and differential attack. This paper briefly reviews the two attack methods proposed by Zhang et al. and outlines the mathematical interpretations of them. Based on their work, we present an improved chosen-plaintext attack to further reduce the number of chosen-plaintexts required, which is proved to be optimal. Moreover, it is found that an elaborately designed known-plaintext attack can efficiently compromise the image cipher under study. This finding is verified by both mathematical analysis and numerical simulations. The cryptanalyzing techniques described in this paper may provide some insights for designing secure and efficient multimedia ciphers.

by Li Zeng, Renren Liu, Leo Yu Zhang, Yuansheng Liu, Kwok-Wo Wong at September 18, 2014 01:30 AM

TheoryOverflow

Tuning Parameters of Locality Sensitive Hashing

We are given a set of $n$ binary vectors, each of dimension $d$, i.e. a binary $d \times n$ matrix. Our goal is to group vectors which are almost similar: $\forall v_i, v_j\in\{0,1\}^d$, we say $v_i$ and $v_j$ are almost similar if $HammingDistance(v_i,v_j)<\epsilon$. Since comparing two $d$-bit vectors is expensive and $n$ is large, we can't afford $O(n^2)$ comparisons, so we use the Locality Sensitive Hashing algorithm to compute a similarity-preserving matrix in subquadratic time.

Now, my question is how to tune the parameters of LSH, i.e. the number of hash tables $m$ and the number of entries (concatenated hash bits) per table $l$, such that the error is minimized (the number of similar items hashed to different buckets, plus the number of dissimilar items hashed to the same bucket).

Please let me know if I am missing something obvious.
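
For orientation, the standard analysis for bit-sampling LSH on Hamming distance (a sketch, reading $l$ as the number of concatenated hash bits per table): two vectors at distance $r$ agree on a sampled bit with probability $p = 1 - r/d$, one table's hash collides with probability $p^l$, and across $m$ independent tables the overall collision probability is $1 - (1 - p^l)^m$. Tuning $l$ and $m$ amounts to shaping this S-curve so it is near 1 for $r < \epsilon$ and near 0 above.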

by Ram at September 18, 2014 01:28 AM

CompsciOverflow

Betweenness Centrality measurement in Undirected Graphs

I'm working with graphs of a very large size (> 60k vertices), and want to speed up B.C. measurements. It is defined here: http://en.wikipedia.org/wiki/Betweenness_centrality The algorithm that I am using to compute B.C. is Brandes' algorithm: http://www.inf.uni-konstanz.de/algo/publications/b-fabc-01.pdf

For undirected graphs, would running the betweenness centrality algorithm on each of the connected components (and then combining the results) give exactly the same answer as computing it on the whole graph at once? I would think so (but don't have a proof), because shortest paths never cross between different components.

by Ryan at September 18, 2014 01:16 AM

TheoryOverflow

Applications of stochastic computing

Recently I stumbled upon the term "stochastic computing".

Can someone tell me what some of the interesting real-world applications of stochastic computing are?

Or is there a particular type of problem that is solved using stochastic computing?

by Kishore Debnath at September 18, 2014 01:15 AM

CompsciOverflow

importance of graph theory in software testing [on hold]

I know that it is an important part of software testing and computer science, but I am not able to find any data related to my question by googling.

How is graph theory related to software testing? What are the applications of graph theory in software testing?

by abhay222 at September 18, 2014 01:05 AM

Stable partition algorithm for Quicksort in linear time

I ran into this question in my exam and was not able to solve it:

Write a pseudo-code for stable partition algorithm for Quick Sort in an array of n elements in linear time. Select the first element as pivot. Here is an example for stable partition.

Input: 13, 14, 15, 16, 17, 12, 11, 10, 9

Output: 12, 11, 10, 9, 13, 14, 15, 16, 17

I know how to partition in linear time, but not how to do a stable partition. I managed to do a stable partition in $n^2$ time, which was just brute force: I proceed with 2 pointers and traverse the array as usual. If I get an element greater than the pivot, my first pointer moves forward. If I get an element less than the pivot, then all the larger elements are shifted one place forward and the smaller element is brought to the second pointer. This is a slight modification of the unstable partition in Quick Sort. But I am not able to figure out what to do for a stable partition, other than using another array, which is not allowed because Quick Sort is an in-place algorithm.
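
For contrast, if the O(1)-space requirement is relaxed to O(n) auxiliary space, a stable linear-time partition is immediate (a sketch in Scala; partition preserves relative order on both sides, which is exactly the stability requirement):

// Linear time and stable, but O(n) extra space, so it relaxes the
// strict in-place constraint of the exam question.
def stablePartition(a: Array[Int]): Array[Int] = {
  val pivot = a(0)
  val (small, large) = a.partition(_ < pivot)
  small ++ large
}

// stablePartition(Array(13, 14, 15, 16, 17, 12, 11, 10, 9))
//   == Array(12, 11, 10, 9, 13, 14, 15, 16, 17)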

by rockydgeekgod at September 18, 2014 01:02 AM

/r/clojure

disable obfuscation for clojurescript variable?

I have a question about ClojureScript obfuscation. I am writing a function in ClojureScript that checks for the existence of a cookie, then launches a Bootstrap modal window if the cookie doesn't exist. All is going well except that the .modal attribute is being obfuscated.

Here is the code I would like to generate:

$(a).modal("show"); 

Here is the code that is actually generated:

$(a).d("show"); 

Here is an example of the code I am using:

(defn start []
  (let [modal (.getElementById js/document "myModal")]
    (if-not (.containsKey goog.net.cookies "myCookie")
      (^:export .modal (js/$ modal) "show"))))

I get the following error in the firefox console when i run the above code:

TypeError: $(...).d is not a function 

My assumption is that if .d were to compile to .modal, it would be recognized as a function.

I am using the :advanced option for cljs compilation.

I would like to avoid using external libraries if possible since this is such a small feature.

any thoughts?

submitted by tgallant

September 18, 2014 12:56 AM

CompsciOverflow

Using the Chomsky-Schutzenberger theorem to prove a language is not context-free?

The Chomsky-Schutzenberger theorem states that a language $L$ is context-free iff there is a homomorphism $h$, a regular language $R$, and a paired alphabet $\Sigma = T \cup \overline{T}$ such that $L = h(D_\Sigma \cap R)$, where $D_\Sigma$ is the Dyck language over $\Sigma$. This is a necessary and sufficient condition for a language to be context-free, so in principle it seems like it should be possible to show that a language is not context-free by showing that there are no valid choices for $h$, $\Sigma$, and $R$ satisfying the theorem.

This earlier question talks about approaches for showing that a language is not context-free, but doesn't mention this approach. Additionally, I can't seem to find any constructive examples of proofs of non-context-freeness along these lines.

Are there any known examples of languages that have been shown not to be context-free by means of the Chomsky-Schutzenberger theorem?

Thanks!

by templatetypedef at September 18, 2014 12:45 AM

Planet Theory

RoBuSt: A Crash-Failure-Resistant Distributed Storage System

Authors: Christian Scheideler, Martina Eikel, Alexander Setzer
Download: PDF
Abstract: In this work we present the first distributed storage system that is provably robust against crash failures issued by an adaptive adversary, i.e., for each batch of requests the adversary can decide based on the entire system state which servers will be unavailable for that batch of requests.

Despite up to $\gamma n^{1/\log\log n}$ crashed servers, with $\gamma>0$ constant and $n$ denoting the number of servers, our system can correctly process any batch of lookup and write requests (with at most a polylogarithmic number of requests issued at each non-crashed server) in at most a polylogarithmic number of communication rounds, with at most polylogarithmic time and work at each server and only a logarithmic storage overhead.

Our system is based on previous work by Eikel and Scheideler (SPAA 2013), who presented IRIS, a distributed information system that is provably robust against the same kind of crash failures. However, IRIS is only able to serve lookup requests.

Handling both lookup and write requests has turned out to require major changes in the design of IRIS.

September 18, 2014 12:42 AM

AMS-Sampling in Distributed Monitoring, with Applications to Entropy

Authors: Jiecao Chen, Qin Zhang
Download: PDF
Abstract: Modern data management systems often need to deal with massive, dynamic and inherently distributed data sources: we collect the data using a distributed network, and at the same time try to maintain a global view of the data at a central coordinator. Such applications have been captured by the distributed monitoring model, which has attracted a lot of attention recently in both theory and database communities. However, all proposed algorithms in distributed monitoring with provable guarantees are ad-hoc in nature, each being designed for a specific problem. In this paper we propose the first generic algorithmic approach, by adapting the celebrated AMS-sampling framework from the streaming model to distributed monitoring. We also show how to use this framework to monitor entropy functions. Our results significantly improve the previous best results by Arackaparambil et al. [2] for entropy monitoring.

September 18, 2014 12:42 AM

Rank Maximal Matchings -- Structure and Algorithms

Authors: Pratik Ghoshal, Meghana Nasre, Prajakta Nimbhorkar
Download: PDF
Abstract: Let G = (A U P, E) be a bipartite graph where A denotes a set of agents, P denotes a set of posts and ranks on the edges denote preferences of the agents over posts. A matching M in G is rank-maximal if it matches the maximum number of applicants to their top-rank post, subject to this, the maximum number of applicants to their second rank post and so on.

In this paper, we develop a switching graph characterization of rank-maximal matchings, which is a useful tool that encodes all rank-maximal matchings in an instance. The characterization leads to simple and efficient algorithms for several interesting problems. In particular, we give an efficient algorithm to compute the set of rank-maximal pairs in an instance. We show that the problem of counting the number of rank-maximal matchings is #P-Complete and also give an FPRAS for the problem. Finally, we consider the problem of deciding whether a rank-maximal matching is popular among all the rank-maximal matchings in a given instance, and give an efficient algorithm for the problem.

September 18, 2014 12:41 AM

Identifying sparse and dense sub-graphs in large graphs with a fast algorithm

Authors: Vincenzo Fioriti, Marta Chinnici
Download: PDF
Abstract: Identifying the nodes of small sub-graphs with no a priori information is a hard problem. In this work, we want to find each node of a sparse sub-graph embedded in both dynamic and static background graphs, of larger average degree. We show that exploiting the summability over several background realizations of the Estrada-Benzi communicability and the Krylov approximation of the matrix exponential, it is possible to recover the sub-graph with a fast algorithm with computational complexity O(N n). Relaxing the problem to complete sub-graphs, the same performance is obtained with a single background. The worst case complexity for the single background is O(n log(n)).

September 18, 2014 12:41 AM

Finding Even Subgraphs Even Faster

Authors: Prachi Goyal, Pranabendu Misra, Fahad Panolan, Geevarghese Philip, Saket Saurabh
Download: PDF
Abstract: Problems of the following kind have been the focus of much recent research in the realm of parameterized complexity: Given an input graph (digraph) on $n$ vertices and a positive integer parameter $k$, find if there exist $k$ edges (arcs) whose deletion results in a graph that satisfies some specified parity constraints. In particular, when the objective is to obtain a connected graph in which all the vertices have even degrees---where the resulting graph is \emph{Eulerian}---the problem is called Undirected Eulerian Edge Deletion. The corresponding problem in digraphs where the resulting graph should be strongly connected and every vertex should have the same in-degree as its out-degree is called Directed Eulerian Edge Deletion. Cygan et al. [\emph{Algorithmica, 2014}] showed that these problems are fixed parameter tractable (FPT), and gave algorithms with the running time $2^{O(k \log k)}n^{O(1)}$. They also asked, as an open problem, whether there exist FPT algorithms which solve these problems in time $2^{O(k)}n^{O(1)}$. In this paper we answer their question in the affirmative: using the technique of computing \emph{representative families of co-graphic matroids} we design algorithms which solve these problems in time $2^{O(k)}n^{O(1)}$. The crucial insight we bring to these problems is to view the solution as an independent set of a co-graphic matroid. We believe that this view-point/approach will be useful in other problems where one of the constraints that need to be satisfied is that of connectivity.

September 18, 2014 12:41 AM

Probabilistic analysis of the (1+1)-evolutionary algorithm

Authors: Hsien-Kuei Hwang, Alois Panholzer, Nicolas Rolin, Tsung-Hsi Tsai, Wei-Mei Chen
Download: PDF
Abstract: We give a detailed analysis of the cost used by the (1+1)-evolutionary algorithm. The problem has been approached in the evolutionary algorithm literature under various views, formulation and degree of rigor. Our asymptotic approximations for the mean and the variance represent the strongest of their kind. The approach we develop is also applicable to characterize the limit laws and is based on asymptotic resolution of the underlying recurrence. While most approximations have their simple formal nature, we elaborate on the delicate error analysis required for rigorous justifications.

September 18, 2014 12:41 AM

The computational power of normalizer circuits over black-box groups

Authors: Juan Bermejo-Vega, Cedric Yen-Yu Lin, Maarten Van den Nest
Download: PDF
Abstract: This work presents a precise connection between Clifford circuits, Shor's factoring algorithm and several other famous quantum algorithms with exponential quantum speed-ups for solving Abelian hidden subgroup problems. We show that all these different forms of quantum computation belong to a common new restricted model of quantum operations that we call \emph{black-box normalizer circuits}. To define these, we extend the previous model of normalizer circuits [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208], which are built of quantum Fourier transforms, group automorphism and quadratic phase gates associated to an Abelian group $G$. In previous works, the group $G$ is always given in an explicitly decomposed form. In our model, we remove this assumption and allow $G$ to be a black-box group. While standard normalizer circuits were shown to be efficiently classically simulable [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208], we find that normalizer circuits are powerful enough to factorize and solve classically-hard problems in the black-box setting. We further set upper limits to their computational power by showing that decomposing finite Abelian groups is complete for the associated complexity class. In particular, solving this problem renders black-box normalizer circuits efficiently classically simulable by exploiting the generalized stabilizer formalism in [arXiv:1201.4867v1,arXiv:1210.3637,arXiv:1409.3208]. Lastly, we employ our connection to draw a few practical implications for quantum algorithm design: namely, we give a no-go theorem for finding new quantum algorithms with black-box normalizer circuits, a universality result for low-depth normalizer circuits, and identify two other complete problems.

September 18, 2014 12:40 AM

/r/compsci

Resources for creating cellular automata (x-post from r/learnprogramming)

It was suggested I post this here...

This may be the wrong subreddit for this, so feel free to redirect me. I am a mathematics graduate student interested in biology, and would like to build a hybrid cellular automata/individual-based model that couples discrete agents with continuous media. I am familiar with programming (Java, MATLAB), but am mostly self-taught. That is, I know about basic concepts and algorithms, but not much else.

Are there any standard resources for people trying to learn about implementing this type of simulation? Also, do people build these models from scratch, or is there existing software that people use? I have read many papers about the subject, but almost all of them are quite lacking on the actual implementation details.

submitted by dr_jekylls_hide

September 18, 2014 12:40 AM

StackOverflow

Scala: How can I get the name of a variable class in a function?

I'd like to do something like def foo[V] : String = { V.getName } but of course this isn't correct. I'm using the name of the class to find a file that contains a serialized instance of the class in the file system.
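
One common way to get this (a sketch, not from the question itself, assuming an implicit scala.reflect.ClassTag is acceptable) is to let the compiler supply the runtime class of the type parameter:

import scala.reflect.ClassTag

// foo[String] yields "java.lang.String"
def foo[V](implicit ct: ClassTag[V]): String =
  ct.runtimeClass.getName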

by Steven Noble at September 18, 2014 12:40 AM

CompsciOverflow

Weakening and Contraction

I saw this site saying weakening is a structural rule where the hypotheses or conclusion of a sequent may be extended with additional members and that contraction is a rule where two equal (or unifiable) members on the same side of a sequent may be replaced by a single member (or common instance). However, I can't seem to get the terms yet. There is even a symbol that looks like a T turned counterclockwise.

In simple terms, what are weakening and contraction? Can they even be described without the symbol that looks like a T turned counterclockwise?
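
For reference, the symbol in question is the turnstile $\vdash$, and the two rules being described are usually written like this ($\Gamma$ and $\Delta$ are lists of formulas; left-hand versions shown):

$$\frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}\;(\text{weakening}) \qquad \frac{\Gamma, A, A \vdash \Delta}{\Gamma, A \vdash \Delta}\;(\text{contraction})$$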

by b16db0 at September 18, 2014 12:33 AM

Implementing general vertex folding procedure in an undirected graph

I'm implementing the algorithm presented in the paper "Improved Parameterized Upper Bounds for Vertex Cover" (PDF). I'm a bit stumped by the General-Fold procedure. What it should do is reduce the number of vertices (and edges) in the graph by finding specific structures (almost-crowns) and replacing them with a single vertex. (There is also a separate case, but I'm focusing on the described one now.)

It goes like this:

Let $I$ be an independent set in $G$ (the graph) and let $N(I)$ be the neighbors of $I$. Suppose that $|N(I)|=|I|+1$ and for every $\emptyset\neq S \subset I$ we have $|N(S)|\geq |S|+1$.

1) If the graph induced by $N(I)$ is not an independent set, then there exists a minimum vertex cover in $G$ that includes $N(I)$ and excludes $I$. (This is a 'standard' crown case - I have it implemented already with a separate approach.)

2) If the graph induced by $N(I)$ is an independent set, let $G'$ be the graph obtained from $G$ by removing $I\cup N(I)$ and adding a vertex $u_I$, and then connecting $u_I$ to every vertex $v \in G'$ such that $v$ was a neighbor of a vertex $u \in N(I)$ in $G$.

My problem is: how do I find a structure that fulfills the suppositions about size as well as having an independent neighborhood? It's all roses when every vertex in $I$ is of the same degree, e.g. ($I$ is yellow, $N(I)$ is green):

Degree 2

which reduces to

Reduced 2

or

Degree 3

which reduces to

[figure: the reduced degree-3 graph]

If only that was the case - I'd just check the vertices of a specific degree, descending.

But unfortunately this reduction may apply to a set of vertices of differing degrees. An example:

2-4-2

note the different edges between yellow and green. It also reduces to:

2-4-2 reduced

What baffles me is what the algorithm for that would be, since the paper states that, given a maximum size $k$ of a vertex cover existing in the graph, it is possible to do this in $O(k^{2}\sqrt{k})$.

Can anybody offer any help or advice or intuition on this?

EDIT: I think I might be understanding neighborhood incorrectly here - if we suppose that $N(I)$ contains the intersection of neighborhoods of every vertex in $I$, this would come down to the first two examples - where every vertex in $I$ is of the same degree...

Any thoughts?

by helluin at September 18, 2014 12:26 AM

Planet Clojure

Convince your boss to use Clojure

Convince your boss to use Clojure by Eric Normand.

From the post:

Do you want to get paid to write Clojure? Let’s face it. Clojure is fun, productive, and more concise than many languages. And probably more concise than the one you’re using at work, especially if you are working in a large company. You might code in Clojure at home. Or maybe you want to get started in Clojure but don’t have time if it’s not for work.

One way to get paid for doing Clojure is to introduce Clojure into your current job. I’ve compiled a bunch of resources for getting Clojure into your company.

Take these resources and do your homework. Bringing a new language into an existing company is not easy. I’ve summarized some of the points that stood out to me, but the resources are excellent so please have a look yourself.

Great strategy and list of resources for Clojure folks.

How would you adapt this strategy to topic maps and what resources are we missing?

I first saw this in a tweet by Christophe Lalanne.

by Patrick Durusau at September 18, 2014 12:18 AM

QuantOverflow

What is the best way of updating data while using Empirical Mode Decomposition to analyze

I have a question about EMD when updating with new data points. For an entire time series, from beginning to end, the EMD performs quite well using the cubic spline function.

The problem happens when new data points are fed in: after recalculating the EMD (including the new data) the number of output IMFs changes (the IMF data series before and after updating all change slightly).

I suspect that the cubic spline functions do not have memory. What is the best way to avoid this problem and to recalculate the EMD while keeping the old results unchanged?

Best!

by user3200376 at September 18, 2014 12:13 AM

CompsciOverflow

Lower space bound on a turing machine accepting palindromes

Let $$PAL = \lbrace x \in \lbrace 0, 1, \# \rbrace^* \mid x = rev(x) \rbrace$$ How do I show that a Turing machine deciding $PAL$ must use space $\Omega(\log n)$?

I have a feeling that I need to use crossing sequences when crossing the middle of the input tape, but I'm not sure how to relate that to space.

by Mathilde at September 18, 2014 12:12 AM

TheoryOverflow

Are aspects of the 20 questions game ambitious enough for graduate research?

I am finishing a masters program this year and I'm interested in doing a Ph.D. in the field of Artificial Intelligence.

I have an idea I've been tinkering with for a while, but some people have told me that it is not ambitious enough for Ph.D.-level research. I'm not sure if this is because the TCS community already knows all there is to know about it or because I have failed to express myself adequately (bad sales pitch). Based on what I could find online, not all that much has been done.

So, I am going to describe what I thought about doing, and I would like to ask if you think this is a promising starting idea and whether it has the potential to lead to something graduate-level worthy.

I want to research efficient and reliable ways to implement a distributed system that can identify entities based on data collected from its users. To give a concrete example of this, I am going to use the 20 questions game.

Existing implementations of this can be found at http://www.20q.net/ and http://www.akinator.com/ however these are closed: no official details whatsoever on how they do it exist anywhere that I can find (subquestion: in general, if someone does something but does not publish how it's done, how worthy is (much) later research that explains how it can be done?).

There are also no details on the required processing power (performance can easily be seen to be very good, although also hard to quantify exactly - is this because they're smart about whatever it is they do or because they run the site on supercomputers?) and the reliability (how often they guess right, under what circumstances etc.) of these systems.

Of course, one can say "use neural networks", "use bayesian classifiers", "use KNN", "use this special form of decision trees: ____", but how well will these work (is it that obvious that they even will?), how exactly they need to be implemented to work efficiently and reliably, how they can be implemented in a distributed manner that is accessible and cheap (for example, if they could be implemented efficiently using widely-used relational databases such as MySQL, that would be great), how they can be tailored to learn from user inputs and probably other factors seem to be up for research.

The problem is more complicated than it seems, I feel, because players can also lie, and you'll notice akinator will guess right in many of those cases as well - so it's not as easy as using a decision tree. As for the other approaches, many questions arise as well, such as scalability, reliability and the exact mechanics that are provably best (or at least better than most others) for this specific problem.

At first, based on what I read online, the problem did not seem too complicated, but when I thought and learned more about what people were suggesting, I quickly started to find various flaws. Then I started to write code and noticed even more potential shortcomings, so now I feel that the problem has a lot more that can be learned from it than trivial implementations of various existing concepts.

Does this have any potential or should I start looking someplace else entirely?

by IVlad at September 18, 2014 12:10 AM

StackOverflow

Value restriction woes

I was experimenting with an implementation of Clojure Transducers in F#, and quickly hit the dreaded Value Restriction error.

The whole point of Transducers is to be composable. This is some sample code:

type Reducer<'a,'b,'c> = ('a -> 'b -> 'a) -> 'a -> 'c -> 'a

module Transducers =
   [<GeneralizableValue>]
   let inline map proj : Reducer<'result,'output,'input> =
      fun xf ->
        fun result input ->
            xf result (proj input)

   let inline conj xs x = x :: xs
   let inline toList xf input = List.fold  (xf conj) [] input

   let xform = map (fun i -> i + 9) >> map (fun a -> a * 5)
   //let xs = toList xform [1;2] // if you apply this, type will be fixed to 'a list
                                 // which makes xform unusable with eg 'a seq

Play on dotnetfiddle

GeneralizableValue was supposed to lift the value restriction, but does nothing, it seems. Your mission is to make this code compile without applying toList (Type inference will fix the type to 'a list, so you could not use the same xform with a seq) and without changing the type of xform (at least not in a way so as to make it not composable). Is this simply not possible in F#?

by Robert Jeppesen at September 18, 2014 12:10 AM

Planet Clojure

Zero downtime Clojure deployments

We’re heavily into microservices at uSwitch, with many of them being Clojure Ring applications, and our infrastructure is hosted on Amazon AWS.

One of the advantages of microservices is that horizontal scaling, especially with EC2 hosting them, is simple: add more machines! Unfortunately the use of Clojure, or more specifically the requirement of the JVM and the associated poor startup performance, means that deployments can take an unreasonable amount of time. To resolve this we run a remove-upgrade-add style of deployment: a host machine is removed from the corresponding ELB; the service is upgraded; then the machine is returned to the ELB.

So upgrading a service for us, at the moment, goes something like this:

Current upgrade situation

The steps of this system are:

  1. Initially running with nginx as a reverse proxy to service v1;
  2. Remove first host from the ELB;
  3. Stop service v1 on host;
  4. Update to service v2 on host;
  5. Put host back into the ELB;
  6. Remove next host from the ELB;
  7. Stop service v1 on host;
  8. Update to service v2 on host;
  9. Put host back into the ELB.

Although this works in the majority of cases we’ve been unhappy with this as a solution for several reasons:

  1. If the service is on a single box then we will lose that service for the period of deployment;
  2. The remove-upgrade-add deployment means that the overall deployment time is linear with respect to the number of hosts;
  3. If the newly deployed service fails to start properly we can, potentially, lose our entire service infrastructure from the ELB;
  4. Removing a host machine from an ELB may remove more than one service and hence degrade our system.

The solution we decided to investigate, as part of our recent hack day, was based on a simple decision: if a service started listening on a random port then we could run two instances, and therefore two different versions, of the service at the same time. The complications are then that the port assignment is random and has to be dealt with when the service is being reverse proxied by nginx, as well as how to tidy up previously running versions of the service. These issues can be solved, though, by using a service registry, such as etcd, where our services can store the port and the PID (process ID), and watching for changes with a process like confd.

The hack day was about trying to create a deployment system more like this:

Zero downtime deployment situation

The steps of this are:

  1. Initially running with nginx as a reverse proxy to service v1;
  2. Service v2 starts & signals the previous v1 port and the new v2 port to etcd;
  3. confd detects port change, regenerates nginx configuration & reloads nginx, disconnecting service v1 & connecting service v2;
  4. Service v2 signals the previous v1 PID and the new v2 PID to etcd;
  5. confd detects PID change, generates & executes kill script, killing service 1.

The behaviour of nginx reload, where the master process starts new workers and then kills the previous workers, means that downtime for the service will be essentially zero.

The solution

The service

We’re going to use Stuart Sierra’s excellent component project to manage the lifecycle of our service which, for the moment, will simply store a random number on initialisation and serve that back in response to any request. Getting Jetty to start on a random, operating system assigned, port is simply a matter of passing zero as the desired port number. If we then communicate this port number in some way that nginx can pick up, we have the ability to run multiple instances of our service at one time and switch the reverse proxy.
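
As a rough sketch of that port-zero trick (illustrative code, not taken from the hack day project), using Ring’s Jetty adapter and reading the assigned port back from the server’s connector:

(require '[ring.adapter.jetty :as jetty])

;; :port 0 asks the operating system for any free port
(def server
  (jetty/run-jetty (fn [_] {:status 200 :headers {} :body "hello"})
                   {:port 0 :join? false}))

;; read back the port the OS actually assigned
(def port (.getLocalPort (first (.getConnectors server))))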

Our etcd will be running on the host machine and not clustered: there is no need to communicate this service information outside of the host machine. To communicate the port number from our service to etcd we will employ etcd-clojure, and use a known key, uswitch/experiment/port/current.

We’ll build a component that will store the PID of the service into uswitch/experiment/pid/current and ensure that it is dependent on the service component itself.

We will also retain the previous values for both of these keys in uswitch/experiment/port/previous and uswitch/experiment/pid/previous, which is supported in our code by etcd-experiment.util/etcd-swap-value

The advantage of random port assignment is not only the ability to run the same service multiple times but also that the port number is only available after the service has started. Hence we will only write the port and, by the component dependency, the pid information to etcd after the service has been successfully deployed and started.

The infrastructure

The use of etcd might seem overkill except that it allows for the reactions to a newly deployed service to be separated from the service itself: we can watch the etcd keys and react to them in any way we desire, without tightly coupling this into the service itself. confd uses configuration files to react to etcd key changes in order to generate files and run commands, and it’s this that we’ll be using.

Our service will have an nginx configuration file associated with it, written in /etc/nginx/sites-enabled/experiment.conf to enable multiple services to run on an individual host. To achieve this based on the changes to the information in etcd we add a configuration file /etc/confd/conf.d/experiment-nginx.toml to our system that will watch uswitch/experiment/port/current, generating our nginx configuration file and causing nginx to reload its configuration when the value changes. The template for the nginx configuration file is simple, requiring only that we set the randomly assigned port in the output file.
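
For concreteness, a sketch of what that pair of files might look like (illustrative names and contents, not the actual hack day files):

# /etc/confd/conf.d/experiment-nginx.toml
[template]
src        = "experiment.conf.tmpl"
dest       = "/etc/nginx/sites-enabled/experiment.conf"
keys       = ["/uswitch/experiment/port/current"]
reload_cmd = "service nginx reload"

# /etc/confd/templates/experiment.conf.tmpl
server {
    listen 80;
    location / {
        # proxy to whichever port the service registered in etcd
        proxy_pass http://127.0.0.1:{{getv "/uswitch/experiment/port/current"}};
    }
}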

nginx reload causes the master process to start new workers and then kill the old ones, which means that we have zero downtime, from the perspective of client applications. Because of this we do not need to remove the host machine from the ELB in order to update our service and therefore we can drop the remove-upgrade-add deployment in favour of parallel deployment to all machines.

We can clear up the previous service by watching uswitch/experiment/pid/previous with confd and generating a script that can be executed to kill the process with the associated PID. As an illustrative sketch (hypothetical file names, following the same confd pattern as the nginx resource above; the actual files are in the hack day project):
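
# /etc/confd/conf.d/experiment-kill.toml
[template]
src        = "kill-experiment.sh.tmpl"
dest       = "/tmp/kill-experiment.sh"
keys       = ["/uswitch/experiment/pid/previous"]
reload_cmd = "sh /tmp/kill-experiment.sh"

# /etc/confd/templates/kill-experiment.sh.tmpl
#!/bin/sh
# Kill the previous instance of the service; ignore an already-dead process.
kill {{getv "/uswitch/experiment/pid/previous"}} || true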

With all of this in place, and with confd periodically checking, we can start our service for the first time, seeing that the nginx configuration gets generated and nginx restarted. If the service is started a second time the nginx configuration is regenerated & nginx reloaded; the previous service is killed; and, far more importantly, the number that the service returns changes!

If you’re interested on trying this on your own machine there are instructions included in the hack day project.

Conclusion

Hopefully this fairly in-depth walkthrough of the system has convinced you that we have:

  • effectively zero downtime for a deployment, where we used to have a reduction in availability;
  • the ability to deploy across multiple machines in parallel meaning we have a near constant deploy time, where we used to have a linear one;
  • improved reliability as services are replaced only after successfully starting, where before we would have to rollback;
  • isolation of services so that they are unaffected by deployments on the same machine, where before we would degrade more than the service being deployed.

The next real step in this, and one that is really at the core of the microservices architecture, would be to cluster etcd and remove nginx completely: if client applications used the registry to locate the service then none of this would really be necessary. In fact, we would also look to drop etcd for a full service registry, such as consul or zookeeper, the latter already being employed in some of our other projects. This, however, requires much more effort from our many client applications, so it’s a way off!

At the moment this remains a hack day piece of code: it works but it is yet to be truly battle tested. Given that we have many services running across many hosts, and we deploy regularly, this solution would save us a considerable portion of our time and may end up being used in our production systems.

If you’ve solved this in another way please do let us know in the comments.

by uSwitch Tech Blog at September 18, 2014 12:00 AM

Planet Clojure

Chatting cats use DataScript for fun

What does a DataScript-driven application look like? Does DataScript really make the difference? I tried to answer both questions by writing a small single-page application. Not to bore you (and myself; mostly myself) with another TodoMVC show-off, I made a simple chat. Meet CatChat:

Check out source code and live version.

CatChat is organized along the principles of the Flux architecture. DataScript is the storage, clojure/core.async is the event bus, and the DOM is rendered by React. Some GSS is used out of curiosity. Not to complicate things, server-side calls are emulated.

Starting

At the beginning we just create a DB:

(def conn (d/create-conn {}))

Initial data is loaded and pushed directly to the DB in onReady callbacks. We assume the server sends data in a format that matches the client-side DB:

(server/call server/get-rooms []
  (fn [rooms]
    (d/transact! conn rooms)))

(server/call server/whoami []
  (fn [user]
    (d/transact! conn [(assoc user
                         :user/me    true
                         :user/state :loaded)])))

Notice how the user object gets augmented. Persistent attributes like :user/name or :room/title come directly from the server DB. But some stuff only makes sense on a client: who the current user is, which room is selected — session-dependent stuff. We store these transient attributes in the same DataScript database, on exactly the same entities. They will come in handy when queries kick in.
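
For example (an illustrative query, not from the post), the client-only :user/me flag later makes finding the current user trivial; dereferencing the connection yields the current DB value:

(d/q '[:find ?u
       :where [?u :user/me true]]
  @conn)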

The good thing about the code above is that it doesn’t know or care about any other part of the system: how data is rendered, who else listens to or cares about the data — it doesn’t matter at this point. Initial database population is a small, self-contained piece of logic. Our app will be built from pieces like that: independent, focused and composable. They do not communicate with each other; the only thing they share is the database.

Dispatching

The data flow incorporates an event bus to make events observable and to allow reactions to be added. An event bus opens a lot of possibilities, including event mocking, simulation and logging. Having a simple way to get insight into the causality graph is crucial for effective understanding of complex systems. Debugging, refactoring and optimization benefits follow.

Most importantly, the event bus helps with decoupling: nobody knows what parties are interested in a particular message; different pieces of functionality can be developed, tested and run independently from each other.

We use a core.async pub/sub channel as the event bus. Incoming chat messages are delivered via server push and then put! to the channel:

(def event-bus (async/chan))
(def event-bus-pub (async/pub event-bus first))

(server/subscribe
  (fn [message]
    (async/put! event-bus [:recv-msg message])))

Our first consumer just saves messages to the database:

(let [ch (async/chan)]
  (async/sub event-bus-pub :recv-msg ch)
  (go-loop []
    (let [[_ msg] (async/<! ch)]
      (d/transact! conn [msg]))
    (recur)))

I put a little twist on the data model here. With each message, the server sends a user id, but the whole user entity (name, avatar) is needed for rendering. Thus, we must issue another async request to fetch the user data. It’s done in another listener to the event stream:

(let [ch (async/chan)]
  (async/sub event-bus-pub :recv-msg ch)
  (go-loop [seen-uids #{}]
    (let [[_ msg] (<! ch)
          uid     (:message/author msg)]
      (if (contains? seen-uids uid)
        (recur seen-uids)
        (do
          (d/transact! conn [(user-stub uid)])
          (load-user uid)
          (recur (conj seen-uids uid)))))))

(defn user-stub [uid]
  { :db/id       uid
    :user/name   "Loading..."
    :user/avatar "avatars/loading.jpg"
    :user/state  :loading })

(defn load-user [uid]
  (server/call server/get-user [uid]
    (fn [user]
      (d/transact! conn [(assoc user
                           :user/state :loaded)]))))

For every incoming message we check if we’ve seen author id already, and if not, then we send request to the server, put temporary user stub to the database (to display placeholders instead of avatar and name), and recur. Once server responds, callback gets called and we store actual user data with the same entity id, overwriting stub attributes.

Note that the code above contains an infinite loop that tracks state (seen user ids) naturally — a thing you can’t afford with callbacks. Go blocks are sequential co-programs which give an illusion of parallel execution a-la green threads. Their parking and resuming happens at points where data arrives to or leaves channels. Core.async can do much more beyond simple pub-sub (think complex topologies of channels, modifiable at runtime), but I couldn’t find a good occasion for that in CatChat.

Rendering

Render is literally a function of two arguments: a DB value for building the DOM and the event bus for communicating events back. The root React component receives the DB value as its one and only source of data. Having all state as a single, immutable value brings many benefits:

  • Rendering is always consistent. No matter how state mutation and rendering loops work, you take an immutable DB snapshot once and render everything from it. The user never sees a screen in a transient state.
  • Previous states can be stored and reverted to. This makes undo/redo, replays and time traveling trivial.
  • Rendering code does not care how data gets there. You can easily render mock states and do what-if speculations without touching rendering at all.
  • Application state can be remembered and restored trivially (e.g. from localStorage between page reloads).

It’s trivial to know when re-rendering is needed. We just establish a DB listener and trigger re-rendering after each transaction:

(d/listen! conn
  (fn [tx-report]
    (ui/request-rerender (:db-after tx-report) event-bus)))

Independent widget development is also a breeze. All widgets are derived from the same database, but other than that, they neither communicate with nor depend on each other. It removes a large piece of logic responsible for two-way data flow between UI components: the user clicked here, let’s tell everyone what and how they should update. We all love shortcuts, but even in small applications this approach is not sustainable. What the UI needs to communicate back to the DB goes through the same event bus everybody else in the system uses. After all, rendering is not that special.

Queries

Let’s now dive into the deeps of DataScript usage. Rendering is the main reader of a database, utilizing all sorts of queries.

The simplest possible query selects, for each room id, its title:

(d/q '[:find ?r ?t
       :where [?r :room/title ?t]]
  db)

Results are always a set of tuples, each tuple consisting of values in :find clause order. In our case it’ll look like this:

#{[12 "Room1"]
  [42 "Room2"]
  ...}

Here we select all unread messages in a specific room:

(d/q '[:find ?m
       :in $ ?r
       :where [?m :message/unread true]
              [?m :message/room ?r]]
  db room-id)

That query does an implicit join (all unread messages are inner-joined with all messages of a specific room) and has a query parameter (room-id).

Notice that db is also just a parameter for a query. DataScript allows for several databases in a single query (and/or collections, they work the same) and can do effective cross-db joins between them.

This function uses previous query to construct a list of datoms for a transaction that will mark all messages in a room as read:

(defn mark-read [db room-id]
  (let [unread (d/q '[:find ?m
                      :in $ ?r
                      :where [?m :message/unread true]
                             [?m :message/room ?r]]
                    db room-id)]
    (map (fn [[mid]]
           [:db/retract mid :message/unread true])
         unread)))

Aggregates are another handy feature. This query takes, for each room, all messages that satisfy the :where clause and then applies count to them, grouping by room:

(d/q '[:find ?r (count ?m)
       :where [?m :message/unread]
              [?m :message/room ?r]]
  db)

Result will look like this:

#{[1 18]
  [2  0]
  [3  2]}

Entities

Take a look at how messages are retrieved by room id:

(let [msgs (->> (d/q '[:find ?m
                       :in $ ?r
                       :where [?m :message/room ?r]]
                     db room-id)
                (map first)
                (map #(d/entity db %))
                (sort-by :message/timestamp))]
  ...)

Query first selects messages for specific room, then results are unpacked (so we have (1 2 3) instead of #{[1] [2] [3]}), then every id gets converted to an entity, and finally all entities are sorted by :message/timestamp attribute.

Sometimes entities are very handy, and you’ll probably use them a lot. Entities are a map-like interface to accessing the DB: given an entity id and a DB value, all attributes of that entity will be in a map (well, sort of — you cannot assoc them, and get is lazy). For example, you have a room with id 17:

(def room (d/entity db 17))

Use it to get attribute values as if it was a regular map:

(:room/title room)    => "Room 17"
(:room/selected room) => true
(:db/id room)         => 17

As you access attributes, they get lazily retrieved and cached:

room => { :db/id 17,
          :room/title "Room 17",
          :room/selected true }

Entities are intentionally dumb and simple. They’re just a view of a specific part of a specific database version. They do not auto-update when the database is changed. They cannot communicate changes back to the database. Entities are not an ORM. In essence, entities are just a handy way to write [:find ?v :in $ ?e ?a :where [?e ?a ?v]] queries.

Entities also make it easy to walk references. If a value of an attribute is a reference to another entity, it’ll be represented as entity object itself:

(d/entity db msg-id)
=>  {:db/id 10001
     :message/text   "..."
     :message/room   { :db/id 2
                       :room/title "Room2" }
     :message/author { :db/id 17
                       :user/name "Ilya"   }}

For this to work, specify attribute type in schema during initial database creation:

(def conn (d/create-conn {
  :message/room   {:db/valueType :db.type/ref}
  :message/author {:db/valueType :db.type/ref}
}))

Multi-valued relations

DataScript is especially good at multi-valued relations. One-to-many and many-to-many relations are first class. If a group has a list of students, DataScript can support that. If an actor plays in a movie, and a movie has a list of actors, you can model that without intermediate table nonsense.

Relations are two-way. It doesn’t really matter if a room contains a list of messages or a message has a reference to a room. You can query it both ways:

(def conn (d/create-conn {
  :message/room {:db/valueType :db.type/ref}
}))

(d/q '[:find ?m
       :in $ ?r
       :where [?m :message/room ?r]]
  db room-id)

(d/q '[:find ?r
       :in $ ?m
       :where [?m :message/room ?r]]
  db message-id)

Even if we reverse relation in schema, it wouldn’t really matter:

(def conn (d/create-conn {
  :room/messages {:db/valueType   :db.type/ref
                  :db/cardinality :db.cardinality/many }
}))

(d/q '[:find ?m
       :in $ ?r
       :where [?r :room/messages ?m]]
  db room-id)

(d/q '[:find ?r
       :in $ ?m
       :where [?r :room/messages ?m]]
  db message-id)

Entities have a nice way to handle references in both directions. In CatChat we use :message/room relation. To access it in forward direction (from message to room):

(get message-entity :message/room) => <room-ent>

Exactly the same relation can be accessed backwards (from room to messages):

(get room-entity :message/_room)   => #{<message-ent>, ...}

Backward-accessed relations always return sets of entities. Forward access returns a single entity or a set depending on the relation’s arity. All this makes it natural to express and navigate entity graphs in DataScript.

Resume

Let’s recap:

  1. The event bus is implemented as a core.async channel, with listeners implemented as independent go loops.
  2. Listeners issue DB transactions to “alter” the DB value.
  3. React render is a function of the immutable DB value and is triggered after each transaction. The current value of the DB is passed as the only property.
  4. Any action in the UI, if it wants to change something, sends an event to the event bus. The loop closes.

For me, that was the best way of writing a UI application I have ever experienced.

It turned out adopting a database is a really good idea for a client-side app. Programming languages make it easy to model state as nested dictionaries and arrays, but most data access patterns are more complicated. “I know, I’ll put messages inside rooms! Oh, now I need to count unread messages across all rooms… Oh, now I need to group messages by user id. Ok, I’m screwed”. This is where DataScript shines: you store a datom once and look at it from different angles: messages by room, room by message, messages by user, user by having unread messages, messages by unread status, and so on. One-to-many collections, many-to-many relations, reference graphs — it all fits naturally into DataScript. It frees a lot of cognitive resources: you don’t have to invent an optimal storage strategy for every next property, messing with all these nested hash map structures, clever rolling caches and consistency issues. In DataScript all data is in one place, it’s normalized, it’s handled uniformly, it’s already optimized — much better than you usually do by hand. And you can query it any way you need.

Project trackers, email clients, calendars, online banks, professional to-do lists are all kinds of client-side apps that are highly structured and can benefit from adopting DataScript. Think Trello or GMail: in any sufficiently complex client-side app there’s a lot of structured data to take care of. I personally sometimes fantasize about rewriting GitHub issues page:

Just imagine how we can store all these tiny issues and all their little properties in DataScript, and then implement all these tabs, buttons and filters on a client, without even touching a server.

This should bring sanity to web app development. Finally, the server API is dumb and inflexible, returning, within a single call, all essential data in a bulk dump format. The server is freed from any presentation-level hacks like ?sort_by=name, ?unread=unread or ?flash_message=Saved. That’s all part of presentation logic and must reside on the client.

Where to get more info?

  1. Flux architecture overview

  2. Rich Hickey speaks core.async and why channels are fundamentally better than callbacks: infoq.com/presentations/clojure-core-async (esp. from 32:00)

  3. Stuart Halloway introduces Datalog for Datomic (DataScript syntax is heavily based on Datomic’s one)

  4. While DataScript doesn’t have its own documentation, take a look at Datomic’s docs on queries, transactions, entities and indexes. They are pretty close, with some minor differences

  5. DataScript tests suite can give you a good overview of what’s possible with DataScript

  6. And, of course, don’t forget CatChat codebase

by Nikita Prokopov at September 18, 2014 12:00 AM

Planet Clojure

Lua scripts in Redis within Node.js

We don’t need to tell you what Redis is. Most engineers built a strong love to it over the years for the simplicity and power it gives you.

Although Lua support was introduced quite some time ago, the average internet tells us that it didn’t really kick off.

Here we share our experience with replacing part of the server side written in Node.JS with Lua executed in Redis.

Why

So, we have this awesome product Bidder, which plays on the ad market exchange. In order to keep all nodes notified of the current situation, we use a Redis database to share the current state, the amount of money available to a particular operation, among all the bidders running in the cloud.

These data structures are rather complex and depend on each other. For example, consider a tree of nodes, where each branch eats cookies

Like so:

a [ ate: 0 ]
|--- ab [ ate: 0 ]
      |--- abe [ ate: 0 ]
      |--- abi [ ate: 0 ]
|--- ac [ ate: 0 ]

So, if the branch abe ate 2 cookies, a should have a balance of two eaten cookies — a [ ate: 2 ]. And then if abi eats 3 and ac eats 1, then it should go like:

a [ ate: 6 = 0 + 2 + 3 + 1 ]
|--- ab [ ate: 5 = 0 + 2 + 3 ]
      |--- abe [ ate: 2 = 0 + 2 ]
      |--- abi [ ate: 3 = 0 + 3 ]
|--- ac [ ate: 1 = 0 + 1 ]

So, previously we had this big script in NodeJS that would batch all outgoing operations and update all nodes of the bank accordingly; this was done from a single machine, called Bank. All other machines would connect to it through a BankClient — basically an EventEmitter that talks to a Bank service. In this architecture you cannot have two Bank machines, as you would’ve run into concurrency issues that way. There are transactions in Redis, but when you have to combine them together with business logic, it gets rather messy as you cannot simply rollback.

But most importantly of all, this is ultimately not scalable and you have to maintain a lot of synchronization code in NodeJS to interact with your bank’s persistence.

Solution

An absolute win in this case is to incorporate all your logic into Redis. Think about it — you have one single-threaded database that performs your business logic requests in an atomic way. And later on you can shard it by the main key of each data structure being stored. It doesn’t really get any easier than this.

Basically, it’s not a database any longer, it is a persistence container, and you deploy your accounting business logic into it.

How

Now, how do you do this with Redis? By utilizing the EVAL command — http://redis.io/commands/EVAL. It enables you to run Lua scripts inside your Redis instance.
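
The smallest possible example, straight from the command line:

$ redis-cli EVAL "return 1 + 1" 0
(integer) 2

The trailing 0 tells Redis how many of the remaining arguments are keys; we will meet it again below.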

Now, a lot of people might get sceptical once they hear Lua. Another language, another platform, all the burden of maintenance and testing issues.

What we aim to show here is that all of those are addressed in rather classical fashion, by unit tests and integration tests, and that you will find the approach of running Lua on top of Redis surprisingly pleasing.

Getting all together

Let’s put it all together. We will try to address the task stated in the Why section of this article: a tree-like structure that propagates its changes to all the parent nodes.

First things first — you need to get Lua running locally on your machine in order to run Lua scripts as you are developing them.

I recommend installing luarocks, a Lua package manager (what npm is to NodeJS, cocoapods are to Objective C, mvn to Java, leiningen to Clojure). With brew on a Mac:

$ brew install luarocks

One thing to keep in mind is that, together with the Lua runtime, you will require a bunch of auxiliary libraries that are installed in Redis, in order to be able to write good, full-blown scripts.

A quick note on the data format we use in Redis. Since most of our data structures are rather complex and require a set of parameters on different levels, we utilize JSON in our Redis values.

The only third-party libraries not native to Lua are struct, CJSON and cmsgpack, as can be seen in the EVAL spec. With luarocks it is a no-brainer:

$ luarocks install lua-cjson

Now we are good to write our first Lua script!

Update the node and all parent nodes

In our sample task we will update a node and all its parent nodes. Nodes are keys in Redis that are composed of the parent names and the node’s own name, all dot-separated.

So, if you have a node a.b.c and you want to persist {spent: 5} with the key a.b.c, the spent amount in both keys a and a.b should also go up.

The basic approach to it will be —

if #ARGV ~= 1 then
    error('your script: there\'s something fishy here, me not like this')
end

if #KEYS ~= 1 then
    error('your script: invalid inputs, one key at a time')
end

local spent = tonumber(ARGV[1])

-- This function will give you the parent node. If there is a key
-- nodeOne.nodeTwo.nodeThree, then nodeTwo is a direct parent of
-- nodeThree, and nodeOne is an indirect parent of nodeThree
local parent = function(account)
    -- searches from the end of the string instead
    -- of the beginning, as string.find does
    local i = account:match('^.*()%.')
    if i then
        return string.sub(account, 0, i - 1)
    end
    return nil
end

local spend = nil -- forward declaration so the function can refer to itself recursively
spend = function(key, amount)
    local node = redis.call('GET', key)
    if not node then
        error('your script: key ' .. key .. ' doesnt exist')
    else
        -- read it from Redis and decode
        node = cjson.decode(node)
    end

    -- Nil check
    if not node.spent then
        node.spent = 0
    end

    -- recurse: first apply this amount to the parent nodes
    local parentKey = parent(key)
    if parentKey then
        spend(parentKey, amount)
    end

    -- spend this amount on this node (the spent amount goes up, as in the example above)
    node.spent = node.spent + amount

    local nodeString = cjson.encode(node)
    local embeddedNodeString = cjson.encode(nodeString) -- escape inner quotes, adds surrounding quotes
    -- parser friendly log
    redis.log(redis.LOG_NOTICE, 'spend.update: { "key":\"' .. key .. '\", "spent":' .. amount .. ', "value":' .. embeddedNodeString ..'}')

    redis.call('SET', key, nodeString)

    return node.spent
end

-- Since there are no arrays in Lua, only tables, keep in mind that they are not 0-based but 1-based
spend(KEYS[1], tonumber(ARGV[1]))

Save this as spend.lua and let’s run this sweetness.

Running the sweetness

We need to put in some initial values. Normally you would have a separate script to initialize them. For the purpose of this article this will do:

$ redis-cli set a "{\"spent\": 0}"
$ redis-cli set a.b "{\"spent\": 0}"
$ redis-cli set a.b.c "{\"spent\": 0}"

Sweet, now let’s run the script above on node a.b.c with a value of 7:

$ redis-cli evalsha $(redis-cli script load "$(cat ./spend.lua)") 1 "a.b.c" 7

Can you feel it? It just ran your Lua function in Redis. Note how we pass 1 to our script before "a.b.c": it tells Redis how many arguments should be treated as keys and how many as script arguments, populating the KEYS and ARGV variables respectively.

Logs and the situation

All logging from your scripts goes to the Redis log, therefore to see what is happening under the covers:

$ tail -F /usr/local/var/log/redis.log

Check your Redis config if not found — redis-cli CONFIG GET logfile.

Transactions and exception handling

Now, for the main part our script is good. Yet, there are some issues with it.

Say, what if a.b is not initialized? Then, according to our implementation, we will first increase the spent amount on key a, but then fail with your script: key a.b doesn't exist.

That will bring our system into an inconsistent state — not cool. Overall we want either all in or all out.

Now there is a fairly simple thing you can do in order to introduce transaction-like behaviour:

  1. Utilize MSET as a single write operation. Since Redis is single-threaded, you don’t have to worry about all those GETs you have
  2. Wrap the whole script with Lua’s pcall — basically a try ... catch block

MSET for a single all in or all out

You’ll need a utility function at the top of your script:

local _cache = {}
local redisMSET = function(_, key, value)
    table.insert(_cache, key)
    table.insert(_cache, value)
end

In code you would use these statements instead of all SETs:

redisMSET('SET', 'a.b', '{"spent": 12}')

And at the end of your script you would commit it all at once with MSET:

redis.call('MSET', unpack(_cache))

You can also be smart about it and write a wrapper around GET that first checks whether your _cache holds the desired key and, if not, performs a call with redis.call('GET', ...).
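
A minimal sketch of such a wrapper (a hypothetical helper, not part of the original script):

local redisGET = function(key)
    -- _cache holds flat key/value pairs, exactly as redisMSET appends them
    for i = 1, #_cache, 2 do
        if _cache[i] == key then
            return _cache[i + 1]
        end
    end
    return redis.call('GET', key)
end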

Wrapping into a try/catch block

There is no magic with this one — you wrap the whole thing in pcall and you are done. It gives you a status variable that tells you whether the block went fine, along with the result of executing that block of code.

The code will go like this —

local main = function()
    
    -- Your code goes here and constructs some `result` object

    return result
end

local status, res = pcall(main)

if status then
    -- `main' went fine: return `result`
    return res
else
    -- `main' raised an error: take appropriate actions
    -- there is no redis.LOG_ERROR level
    redis.log(redis.LOG_WARNING, 'error: ' .. res)
    return redis.error_reply('error: ' .. res)
end

Combining it together with transactions:

local main = function()
    local _cache = {}
    local redisMSET = function(_, key, value)
        table.insert(_cache, key)
        table.insert(_cache, value)
    end

    -- your code goes here and constructs some `result` object

    redis.call('MSET', unpack(_cache))

    return result
end

local status, res = pcall(main)

if status then
    -- `main' went fine: return `result`
    return res
else
    -- `main' raised an error: take appropriate actions
    -- there is no redis.LOG_ERROR level
    redis.log(redis.LOG_WARNING, 'error: ' .. res)
    return redis.error_reply('error: ' .. res)
end

Voilà!

Unit-Testing our sweet Lua awesomeness

Now all this is good, but here at Screen6 we do unit testing, just to be sure and because it is a good way to go about things.

There are two ways you can go about tests:

  1. Test your Lua code with Lua unit tests
  2. Test your Lua code on the application level. In our case — calling Lua script within Redis from CoffeeScript

Unit Testing with Busted

We use the Busted framework to run unit tests on our Lua scripts. If you are familiar with Mocha, you will feel at home right away.

Testing is rather straightforward; there is only one thing to note: you would prefer to run tests “as if you are in Redis”, and therefore you need some wrapper code to mock the API Redis provides you within Lua.

A very basic implementation that will enable you to use these operations:

  1. redis.call( ‘SET’, … )
  2. redis.call( ‘MSET’, … )
  3. redis.call( ‘GET’, … )
  4. redis.log( … )

Will look like this:

cjson = require "cjson"

local DATABASE = {}

function switch(t)
    t.case = function(self, x)
        local f = self[x] or self.default
        if f then
            if type(f) == "function" then
                return f(x, self)
            else
                error("case "..tostring(x).." not a function")
            end
        end
    end
    return t
end

redis = {
    call = function(operation, ...)
        local arg = {...} -- capture varargs explicitly; the implicit 'arg' table is gone in Lua 5.2+
        mockRedis = switch {
            ['SET'] = function (x)
                    if #arg ~= 2 then
                        error('invalid amount of arguments passed on to mockRedis.set')
                    else
                        DATABASE[arg[1]] = cjson.decode(arg[2])
                    end
                end,
            ['MSET'] = function (x)
                    if #arg % 2 ~= 0 then
                        error('invalid amount of arguments (not even) passed on to the mockRedis.mset')
                    else
                        for i = 1, #arg, 2 do
                            DATABASE[arg[i]] = cjson.decode(arg[i + 1])
                        end
                    end
                end,
            ['GET'] = function (x)
                    if #arg ~= 1 then
                        error('invalid amount of arguments passed on to mockRedis.get')
                    else
                        if DATABASE[arg[1]] then
                            return cjson.encode(DATABASE[arg[1]])
                        else
                            return nil
                        end
                    end
                end,
            default = function (x)
                    error('unsupported method ' .. operation .. ' called on mockRedis')
                end,
        }
        return mockRedis:case(operation)
    end,

    LOG_DEBUG = "DEBUG",
    LOG_VERBOSE = "VERBOSE",
    LOG_NOTICE = "NOTICE",
    LOG_WARNING = "LOG_WARNING",

    log = function(level, message)
        --print(level .. "# " .. message)
    end
}

It might be slightly overwhelming if you are not that much into Lua, although it’s pretty basic; once you get your head around it, it won’t be a biggy to extend it further to suit your needs.

Integration test: Spawning Redis process from Node.JS to execute Lua in it

Now, for the sake of simplicity I will drop the part about doing integration tests through a real instance of Redis.

In short we wrote a wrapper that launches Redis from Node.JS while you are running your Mocha tests in before and after blocks, something like this —

RedisServer = (require './redis_spawn').RedisServer
redisServerOpts =
    workingDirectory: RedisServer::optionsDefault.workingDirectory
    save: true
    timeout: 2000
redisServer = new RedisServer redisServerOpts

...

    before (done) ->
        redisServer.start () ->
            redisClient = redis.createClient redisServerOpts.config.port
            redisClient.once 'connect', () ->
                redisClient.flushall done

It then interacts with the Lua script that is running on the freshly spawned Redis instance. Should there be any interest in it, we’ll put up a separate article on the topic. Don’t hesitate to reach out to us!

In Production

Installing Lua script upon application start

Now that you have your Lua scripts thoroughly tested it is time to run them in production. The basic idea is to load them once on application startup and keep the resulting SHA, so that you won’t have to send the script there and back each and every time upon request.

Instead of a thousand words, here is a CoffeeScript snippet that will do just that —

redis = require 'redis'

module.exports.YourDbClient =
class RdbClient extends EventEmitter
    constructor: () ->
        @scripts = {
            someScript1:
                text: (fs.readFileSync (path.join __dirname, 'someScript1.lua')).toString()
                sha1: null
            someScript2:
                text: (fs.readFileSync (path.join __dirname, 'someScript2.lua')).toString()
                sha1: null
        }
        script.name = name for name, script of @scripts      # copy name -> .name
        @connect()

    connect: (cb) ->
        if @rdb then return cb?()

        @rdb = redis.createClient <redisPort>, <redisHost>, <redisOpts>

        loadScripts = (cb) =>
            loader = @rdb.multi()
            loader.script 'load', @scripts.someScript1.text
            loader.script 'load', @scripts.someScript2.text
            loader.exec (err, results) =>
                if err
                    cb? err
                    throw err
                @scripts.someScript1.sha1 = results[0]
                @scripts.someScript2.sha1 = results[1]
                cb?()

        @rdb.on 'connect', () =>
            loadScripts () => @emit 'connected'
        @rdb.on 'reconnect', () =>
            loadScripts () => @emit 'reconnected'
        @rdb.once 'end', () =>
            @rdb = null

    actionWithRedis: (args...) ->
        @txn ?= @rdb.multi()
        @operation = [@scripts.someScript1.sha1]
        @txn.evalsha.apply @txn, (@operation.concat args)
        @txn.exec (err, results) =>
            # further sweetness
            ...

Puppet configuration for Jenkins

We utilize Jenkins with JUnit test result reports; Busted works with it just fine through the xUnit reporting engine —

$ busted -o junit ./test

And here are the small bits of puppet configuration to enable your Jenkins with Lua, Busted and the necessary libs:

class genericpackages::lua {
    package { "luarocks":
        ensure => installed,
    }
    exec{ "install busted":
        command => "luarocks install busted",
        require => Package['luarocks'],
    }
    exec{ "install cjson redis library":
        command => "luarocks install lua-cjson",
        require => Package['luarocks'],
    }
}

Summary

Four months into the game it works like a charm and is easy to monitor. We are very happy with the transition of our accounting logic into Lua scripts executed directly in Redis.

  1. Lua Main Page
  2. LuaRocks
  3. Installing Lua Rocks on Mac
  4. Intro to Lua for Redis programmers

by Screen6HQ at September 18, 2014 12:00 AM

Planet Theory

Fast algorithmic self-assembly of simple shapes using random agitation

Authors: Ho-Lin Chen, David Doty, Dhiraj Holden, Chris Thachuk, Damien Woods, Chun-Tao Yang
Download: PDF
Abstract: We study the power of uncontrolled random molecular movement in the nubot model of self-assembly. The nubot model is an asynchronous nondeterministic cellular automaton augmented with rigid-body movement rules (push/pull, deterministically and programmatically applied to specific monomers) and random agitations (nondeterministically applied to every monomer and direction with equal probability all of the time). Previous work on the nubot model showed how to build simple shapes such as lines and squares quickly---in expected time that is merely logarithmic of their size. These results crucially make use of the programmable rigid-body movement rule: the ability for a single monomer to control the movement of a large objects quickly, and only at a time and place of the programmers' choosing. However, in engineered molecular systems, molecular motion is largely uncontrolled and fundamentally random. This raises the question of whether similar results can be achieved in a more restrictive, and perhaps easier to justify, model where uncontrolled random movements, or agitations, are happening throughout the self-assembly process and are the only form of rigid-body movement. We show that this is indeed the case: we give a polylogarithmic expected time construction for squares using agitation, and a sublinear expected time construction to build a line. Such results are impossible in an agitation-free (and movement-free) setting and thus show the benefits of exploiting uncontrolled random movement.

September 18, 2014 12:00 AM

September 17, 2014

Dave Winer

Grunts and snorts

You gotta love the ingenuity of this keyboard for the new iOS. You type words, and out the other end come Emojis. A product totally in tune with the time. It's grunting and snorting with style. First we got reduced to 140 chars. With Apple Watch it's gone the next step -- a heart beat. By definition, everyone who's alive can express themselves that way. No need for words or punctuation. Soon there will be a watch for your cat. They have heartbeats too. Now Twitter seems opulently verbose. What's next? The real breakthrough will come when we have a device that the dead can use to express themselves.

September 17, 2014 11:57 PM

StackOverflow

Scala general performances and code optimization [on hold]

What about the performance of compiled Scala code compared to normal Java?

And what about the overhead of recursion instead of "the standard for/while loop"? Is it less performant?

And if yes, what are the trade-offs of losing performance?

EDIT

The answer given to me by Andreas Neumann was almost all I wanted to know. Just missing:

When is it better to use Scala instead of Java?

Narrowing the question: when would Scala be preferred over Java for an entire project, and when is a mix of both better?

I don't know, just guessing, maybe for code maintainability?
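One concrete data point on the recursion part of the question: scalac compiles self tail calls into the same jump-based loop a while loop produces, so tail recursion costs nothing extra, while non-tail recursion still pays for stack frames. A minimal sketch, with @tailrec asking the compiler to verify that the optimization applies:

import scala.annotation.tailrec

// Tail-recursive sum: the recursive call is in tail position, so scalac
// rewrites it into a loop with no additional stack frames.
@tailrec
def sumTo(n: Int, acc: Long = 0L): Long =
  if (n == 0) acc else sumTo(n - 1, acc + n)

// The imperative equivalent, for comparison.
def sumToLoop(n: Int): Long = {
  var i = n
  var acc = 0L
  while (i > 0) { acc += i; i -= 1 }
  acc
}

// sumTo(10000) == sumToLoop(10000) == 50005000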

by Raffaello at September 17, 2014 11:34 PM

sbt-native-packager maven repo upload path is not similar to sbt

My build.sbt has:

name := "myapp"

organization := "com.myorg"

version := "0.1-SNAPSHOT"

scalaVersion := "2.11.1"

publishMavenStyle := true

The publish task in SBT uploads artifacts to Maven at path:

repo/com/myorg/myapp_2.11/0.1-SNAPSHOT/

But the publish in Universal of SBT-NATIVE-PACKAGER uploads artifacts to Maven at path:

repo/com/myorg/myapp_0.1-SNAPSHOT/0.1-SNAPSHOT/

Can someone please suggest how to change the Maven upload path in SBT-NATIVE-PACKAGER.

Thanks, Keisham

by Keisham at September 17, 2014 11:31 PM

QuantOverflow

Robust Returns-Based Style Analysis

Sharpe's Return-Based Style Analysis is an interesting theory but flawed in practice when working with long-short funds or funds that are changing strategies over shorter periods of time due to the limits of linear regression.

I have found a few papers looking into improvements to make the calculations more robust Markov, Muchnik, Krasotkina, Mottl (2006) seems fairly reasonable for instance. However, they commonly only deal with the time-varying beta issue.

I was wondering if there is anyone out there doing work on the limitations of linear regression for style analysis, in particular more robust variance-covariance matrices for the minimization of the objective function.

by rhaskett at September 17, 2014 11:18 PM

StackOverflow

Wiremock with Scalatra

I followed the example and attempted to use WireMock to mock an authentication service used by a Scalatra app. However, I can't get WireMock and Scalatra to work together. The idea is to provide a mock response for the authentication request sent by Scentry to another authentication provider. How do we combine a typical Scalatra test:

def unauthenticated = get("/secured") {
  status must_== 400
}

with WireMock stub for:

stubFor(WireMock.post(urlMatching("/some/auth/service*"))
           .willReturn(
             aResponse()
               .withStatus(200)))
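One shape that typically works is to run a WireMockServer for the lifetime of the suite and point the application's auth-provider URL at it. A sketch, assuming a hypothetical SecuredServlet under test and that the provider base URL is configurable to http://localhost:8089:

import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.client.WireMock
import com.github.tomakehurst.wiremock.client.WireMock._
import com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig
import org.scalatra.test.scalatest.ScalatraFlatSpec

class SecuredSpec extends ScalatraFlatSpec {
  addServlet(classOf[SecuredServlet], "/*") // hypothetical servlet under test

  val wireMock = new WireMockServer(wireMockConfig().port(8089))

  override protected def beforeAll(): Unit = {
    super.beforeAll() // starts the embedded Scalatra/Jetty server
    wireMock.start()
    WireMock.configureFor("localhost", 8089)
    stubFor(post(urlMatching("/some/auth/service.*"))
      .willReturn(aResponse().withStatus(200)))
  }

  override protected def afterAll(): Unit = {
    wireMock.stop()
    super.afterAll()
  }

  "GET /secured" should "reject unauthenticated requests" in {
    get("/secured") { assert(status == 400) }
  }
}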

by Petteri Hietavirta at September 17, 2014 11:17 PM

Planet Clojure

Compiler introduction of transients to pure functions

I will define the annotative types

typedef Immutable t
typedef Transient t

persistent! :: Transient t -> Immutable t

In Clojure and other functional languages, the abstraction is provided that (for the most part) values cannot be updated, only new values may be produced. Naively, this means that every update to a value must produce a copy of the original value featuring the change. More sophisticated implementations may opt for structural sharing, wherein updated versions of some structure share backing memory with the original or source value on the substructures where no update is performed. Substructures where there is an update must be duplicated and updated as in the naive case.

This means that tree based structures which maximize the amount of sharable substructure perform better in a functional context because they minimize the fraction of a datastructure which must be duplicated during any given update.

Unfortunately however, such structural sharing still carries a concrete cost in terms of memory overhead, garbage collection and cache performance when compared to a semantically equivalent update in place over a mutable datastructure.

Ideally, we would be able to write programs such that we preserve the abstraction of immutable values, while enabling the compiler or other runtime to detect when intentional updates in place are occurring and take the opportunity to leverage the performance improvements consequent from mutable data in these cases, while ensuring that no compiler-introduced mutability can become exposed to a user through the intentional API as built by the programmer.

In such a "pure" language, there is only one class of functions: functions from Immutable values to Immutable values. However, if we wish to minimize the performance overhead of this model, four cases become apparent. λ Immutable → Immutable functions are clearly required, as they represent the intentional API that a programmer may write. λ Mutable → Mutable functions could be introduced as implementation details within a λ Immutable → Immutable block, so long as the API contract that no mutable objects may leak is preserved. The remaining two cases, λ Immutable → Mutable and λ Mutable → Immutable (transient and persistent! above), mark the entry to and exit from such a mutable region.

Consider the Clojure program

(reduce (partial apply assoc)
    {}
    (map vector
       (range 10000)
       (range 10000)))

This program will sequentially apply the non-transient association operation to a value (originally the empty map) until it represents the identity mapping over the interval [0,9999]. In the naive case, this would produce 10,000 full single-update copies of the map. Clojure, thanks to structural sharing, will still produce 10,000 update objects, but as Clojure's maps are implemented as log₃₂ hash array mapped tries, only the array containing the "last" n % 32 key/value pairs, plus the root node, must be duplicated. This reduces the cost of the above operation from T(~10,000²) to T(10,000*64) ≊ T(640,000), which is huge for performance. However, a Sufficiently Smart Compiler could recognize that the cardinality of the produced map is max(count(range(10000)), count(range(10000))), clearly 10000. Consequently a hash map of at worst 10000 entries is required given ideal hashing; assuming a load factor of 2/3, this means our brilliant compiler can preallocate a hash map of 15000 entries (presumed T(1)) and then perform T(10000) hash insertions with a very low probability of having to perform a hash table resize.

Clearly at least in this example the mutable hash table would be an immense performance win because while we splurge a bit on consumed memory due to the hash table load factor (at least compared to my understanding of Clojure's hash array mapped trie structure) the brilliantly compiled program will perform no allocations which it will not use, will perform no copying, and will generate no garbage compared to the naive structurally shared implementation which will produce at least 9,967 garbage pairs of 32 entry arrays.

The map cardinality hack is its own piece of work and may or may not be compatible with the JVM due to the fact that most structures are not parametric on initial size and instead perform the traditional 2*n resizing, at least abstractly. However, our brilliant compiler can deduce that the empty map which we are about to abuse can be used as a transient and made persistent when it escapes the scope of the above expression.

Consider the static single assignment form for the above (assuming a reduce definition which macroexpands into a loop, which Clojure doesn't do).

    [1 ] = partial
    [2 ] = apply
    [3 ] = assoc
    [4 ] = invoke(1, 2, 3)              ;; (partial apply assoc)
    [5 ] = {}
    [6 ] = map
    [7 ] = vector
    [8 ] = range
    [9 ] = 10000
    [10] = invoke(8, 9)                 ;; (range 10000)
    [11] = invoke(6, 7, 10, 10)         ;; (map vector [10] [10])
    [12] = first
    [13] = rest
loop:
    [14] = phi(5,  18)
    [15] = phi(11, 19)
    [16] = if(15, cont, end)
cont:
    [17] = invoke(12, 15)
    [18] = invoke(4, 14, 17)
    [19] = invoke(13, 15)
    [20] = jmp(loop)
end:
    [21] = return(18)

Where the phi function represents that the value of the phi node depends on the source of the flow of control. Here I use the first argument to the phi functions to mean that control "fell through" from the preceding block, and the second argument to mean that control was returned to this block via instruction 20.

This representation reveals the dataflow dependence between sequential values of our victim map. We also have the contract that the return, above labeled 21, must be of an Immutable value. Consequently we can use a trivial dataflow analysis to "push" the Immutable annotation back up the flow graph, giving us that 18, 14 and 5 must be immutable, 5 is trivially immutable, 18 depends on 14, which depends on 18 and 5, implying that it must be immutable as well. So far so good.

We can now recognize that we have a phi(Immutable, Immutable) on a loop back edge, meaning that we are performing an update of some sort within the loop body. This means that, so long as no Transient value escapes into the Immutable result, we can safely rewrite the Immutable result to be a Transient and add a persistent! invocation before the return operation. Now we have phi(Immutable, Transient) → Transient, which makes no sense, so we add a loop header entry to make the initial empty map Transient, giving us phi(Transient, Transient) → Transient, which is exactly what we want. Now we can rewrite the loop update body to use assoc! :: Transient Map → Immutable Object → Immutable Object → Transient Map rather than assoc :: Immutable Map → Immutable Object → Immutable Object → Immutable Map.

Note that I have simplified the signature of assoc to the single key/value case for this example, and that the key and value must both be immutable. This is required as the persistent! function will render only the target object itself and not its references persistent.

This gives us the final operation sequence

    [1 ] = partial
    [2 ] = apply
    [3 ] = assoc!
    [4 ] = invoke(1, 2, 3)              ;; (partial apply assoc!)
    [5 ] = transient
    [6 ] = {}
    [7 ] = invoke(5, 6)
    [8 ] = map
    [9 ] = vector
    [10] = range
    [11] = 10000
    [12] = invoke(10, 11)               ;; (range 10000)
    [13] = invoke(8, 9, 12, 12)         ;; (map vector [12] [12])
    [14] = first
    [15] = rest
loop:
    [16] = phi(7,  20)
    [17] = phi(13, 21)
    [18] = if(17, cont, end)
cont:
    [19] = invoke(14, 17)
    [20] = invoke(4, 16, 19)
    [21] = invoke(15, 17)
    [22] = jmp(loop)
end:
    [23] = persistent!
    [24] = invoke(23, 20)
    [25] = return(24)

Having performed this rewrite we're done. This transform allows an arbitrary loop using one or more persistent datastructures as accumulators to be rewritten in terms of transients if there exists (or can be inferred) a matching Transient t → Transient t updater equivalent to the updater used. Note that if a non-standard-library updater (say a composite updater) is used, then the updater needs to be duplicated and, if possible, recursively rewritten from a Persistent t → Persistent t by replacing the standard library operations for which there are known doubles with their Transient t counterparts, until the rewrite either fails to produce a matching Transient t → Transient t or succeeds. If any such rewrite fails then this entire transform must fail. Also note that this transformation can be applied to subterms... so long as the Persistent t contract is not violated on the keys and values (here, of the map); in a nontrivial example their computation could likewise be rewritten to use compiler-persisted transients.

Now yes you could have just written

(into {}
   (map vector
      (range 10000)
      (range 10000)))

which would have used transients implicitly, but that requires that the programmer manually perform an optimization requiring further knowledge of the language and its implementation details when clearly a relatively simple transformation would reveal the potential for this rewrite.

^d

by Reid McKenzie at September 17, 2014 11:13 PM

StackOverflow

Spark accumulableCollection does not give correct count intermittently

I am using Spark to do employee record accumulation and for that I use Spark's accumulator. The code works well with integer counting (the example given in the docs) but it does not work consistently with accumulableCollection. Does anyone know what's going on? Following is my code that you can run locally and verify.

package demo

import org.apache.spark.{SparkContext, SparkConf, Logging}

import org.apache.spark.SparkContext._
import scala.collection.mutable

object ListAccuApp extends App with Logging {
  case class Employee(id:String, name:String, dept:String)

  val conf = new SparkConf().setAppName("Employees") setMaster ("local[4]")
  val sc = new SparkContext(conf)

  val empAccu = sc.accumulableCollection[mutable.MutableList[Employee], Employee](mutable.MutableList[Employee]())

  val employees = List(
    Employee("10001", "Tom", "Eng"),
    Employee("10002", "Roger", "Sales"),
    Employee("10003", "Rafael", "Sales"),
    Employee("10004", "David", "Sales"),
    Employee("10005", "Moore", "Sales"),
    Employee("10006", "Dawn", "Sales"),
    Employee("10007", "Stud", "Marketing"),
    Employee("10008", "Brown", "QA")
  )

  System.out.println("employee count " + employees.size)


  sc.parallelize(employees).foreach(e => {
    empAccu += e
  })

  System.out.println("empAccumulator size " + empAccu.value.size)
}

by smishra at September 17, 2014 11:03 PM

StackOverflow

How to preserve argument type of polymorphic function in return type

Given a polymorphic function, how to match the polymorphic argument and return a value of the same type without resorting to explicit casts?

sealed trait Data
case class DString(s: String) extends Data
case class DInt(n: Int) extends Data

def double[D <: Data](d: D): D = d match {
  case DString(s) => DString(s ++ s)
  case DInt(n) => DInt(n + n)
}

This produces type mismatches (found DString/DInt, required D). Why does the type system not accept this when the input type clearly equals the output type?
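The matcher does not refine D to DString inside the first case (scalac performs no GADT-style type reconstruction here), so each case body is checked against the bare D. One workaround, sketched with a typeclass so that each instance fixes D concretely:

sealed trait Data
case class DString(s: String) extends Data
case class DInt(n: Int) extends Data

// Each instance pins down D, so input and output types provably match.
trait Doubler[D <: Data] { def double(d: D): D }

implicit val doubleString: Doubler[DString] =
  new Doubler[DString] { def double(d: DString) = DString(d.s ++ d.s) }

implicit val doubleInt: Doubler[DInt] =
  new Doubler[DInt] { def double(d: DInt) = DInt(d.n + d.n) }

def double[D <: Data](d: D)(implicit ev: Doubler[D]): D = ev.double(d)

// double(DString("ab")) == DString("abab"); double(DInt(3)) == DInt(6)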

by estolua at September 17, 2014 10:13 PM

CompsciOverflow

Not sure if my recurrence is correct for T(n) = 2T(n^.5) + O(1) [duplicate]

I have

T(n) = 2T(n^.5) + O(1)

     = 2(2T(n^.25) + O(1)) + O(1)

     = 2(2(2T(n^.125) + O(1)) + O(1)) + O(1)

     and so on

To me this seems wrong, and I don't know where to go from here to reach a Big-O solution...

Thanks in advance.
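For what it's worth, a sketch of the standard substitution: write $n = 2^m$ so that taking a square root halves the exponent,

$$T(2^m) = 2\,T(2^{m/2}) + O(1), \qquad S(m) := T(2^m) \;\Rightarrow\; S(m) = 2\,S(m/2) + O(1).$$

By case 1 of the Master theorem, $S(m) = \Theta(m)$, and substituting back $m = \log n$ gives $T(n) = \Theta(\log n)$.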

by MMP at September 17, 2014 09:53 PM

CompsciOverflow

Is there a word that means decentralized AND distributed?

Assuming decentralized does not imply distributed, is there a word that can be used to mean both decentralized and distributed?

by kag0 at September 17, 2014 09:49 PM

StackOverflow

Partial application in Scala not referentially transparent?

Given two functions:

def f(a: String, b: Int): Int = a.length + b
val g: Int => String = _.toString

why is it that I can compose a partially applied f with g by means of an intermediate assignment:

val f_ = f(_: String, 42)
f_ andThen g
// String => String = <function1>

but not directly:

f(_: String, 42) andThen g
// error: value andThen is not a member of Int

Is this a problem with the type inferencer or somehow expected behavior?
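A placeholder expands to the smallest syntactic Expr that properly contains it, which here is the whole line, so the compiler sees (x: String) => f(x, 42) andThen g and complains that f(x, 42) is an Int. Parenthesizing the section delimits the expansion; a sketch of the usual workaround:

def f(a: String, b: Int): Int = a.length + b
val g: Int => String = _.toString

// Parens force the function value to be built before andThen is resolved:
val h = (f(_: String, 42)) andThen g
// h("abc") == "45"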

by estolua at September 17, 2014 09:49 PM

TheoryOverflow

Classifying reversible gates

Post's lattice, described by Emil Post in 1941, is basically a complete inclusion diagram of sets of Boolean functions that are closed under composition: for example, the monotone functions, the linear functions over GF(2), and all functions. (Post didn't assume that the constants 0 and 1 were available for free, which made his lattice much more complicated than it would be otherwise.)

My question is whether anything analogous has ever been published for classical reversible gates, like the Toffoli and Fredkin gates. I.e., which classes of reversible transformations on {0,1}n can be generated by some collection of reversible gates? Here are the rules: you're allowed an unlimited number of ancilla bits, some preset to 0 and others preset to 1, as long as all the ancilla bits are returned to their initial settings once your transformation of {0,1}n is finished. Also, a SWAP of 2 bits (i.e., a relabeling of their indices) is always available for free. Under these rules, my student Luke Schaeffer and I were able to identify the following ten sets of transformations:

  1. The empty set
  2. The set generated by the NOT gate
  3. The set generated by NOTNOT (i.e., NOT gates applied to any 2 of the bits)
  4. The set generated by CNOT (i.e., the Controlled-NOT gate)
  5. The set generated by CNOTNOT (i.e., flip the 2nd and 3rd bits iff the 1st bit is 1)
  6. The set generated by CNOTNOT and NOT
  7. The set generated by the Fredkin (i.e., Controlled-SWAP) gate
  8. The set generated by Fredkin and CNOTNOT
  9. The set generated by Fredkin, CNOTNOT, and NOT
  10. The set of all transformations

We'd like to identify any remaining families, and then prove that the classification is complete---but before we spend much time on it, we'd like to know whether anyone has done it before.

by Scott Aaronson at September 17, 2014 09:35 PM

Complexity Book with Slides

Is there a good book on complexity theory (upper level undergraduate and/or graduate) that comes with slides prepared either by the author or by someone else (but that are specifically tailored to correspond to the book)?

by Peasant at September 17, 2014 09:32 PM

StackOverflow

Json4s support for case class with trait mixin

I am trying to serialize a Scala case class using json4s with Jackson support. But in scenarios where I am trying to mix in traits, it fails to serialize the class. Below is a code example.

trait ISearchKey {
    var id:String = ""  
}

When I execute the code below I get empty curly brackets (no value is serialized), but if I remove the trait mixin then the CrystalFieldInfo value gets serialized properly

  val fld = new CrystalFieldInfo("Field1") with ISearchKey
  fld.id = "Id1"          
  implicit val formats = Serialization.formats(NoTypeHints)
  val ser = write[CrystalFieldInfo with ISearchKey](fld)
  println(ser)

Would appreciate any insight into this problem. Thanks in advance
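For context, json4s's default case-class serialization reflects over constructor parameters, so a var contributed by a trait mixin never makes it into the output. One possible fix is a FieldSerializer, which also emits plain (non-constructor) fields; the sketch below uses a simplified stand-in for CrystalFieldInfo, and whether FieldSerializer picks up mixin fields this way is an assumption worth verifying:

import org.json4s._
import org.json4s.jackson.Serialization
import org.json4s.jackson.Serialization.write

trait ISearchKey { var id: String = "" }
case class CrystalFieldInfo(name: String) // simplified stand-in

// Ask json4s to also emit plain fields on values typed as ISearchKey.
implicit val formats =
  Serialization.formats(NoTypeHints) + FieldSerializer[ISearchKey]()

val fld = new CrystalFieldInfo("Field1") with ISearchKey
fld.id = "Id1"
println(write(fld)) // expected to include both name and id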

by GammaVega at September 17, 2014 09:31 PM

CompsciOverflow

How can error correction codes reduce bit error rate, for same amount of energy [migrated]

I am self-studying error-correcting block codes and have some confusion about their performance.

Suppose we have $r$ information bits and we make $k$ coding bits in addition. I am given to understand that, for the same amount of energy per information bit, the coded system performs better compared to an uncoded system, where the channel is additive white Gaussian noise (AWGN) and modulation is BPSK.

This is demonstrated in a simulation, where the bit error rate (BER) is plotted against energy per information bit.

My question is, how does this come to be? When we code the $r$ bits, we in fact spread the energy over $r+k$ bits, whereas in the uncoded case all the energy is used on the $r$ bits alone. On the other hand, the coded bit stream has $r+k$ dimensions whereas the uncoded stream has $r$ dimensions.

Can someone please explain, mathematically, where the BER improvement comes from?
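A sketch of where the gain shows up, using standard textbook bounds: with code rate $R = r/(r+k)$ each coded bit carries energy $RE_b$, so the raw channel error rate is indeed worse,

$$p_{\text{uncoded}} = Q\left(\sqrt{2E_b/N_0}\right), \qquad p_{\text{raw}} = Q\left(\sqrt{2RE_b/N_0}\right),$$

but with soft-decision decoding the dominant error events of a code with minimum distance $d_{\min}$ behave like

$$P_e \approx Q\left(\sqrt{2Rd_{\min}E_b/N_0}\right),$$

so the asymptotic coding gain is $10\log_{10}(Rd_{\min})$ dB: the decoder wins whenever $Rd_{\min} > 1$, i.e. whenever the added distance more than compensates for the energy spread over $r+k$ bits.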

by seek at September 17, 2014 09:28 PM

/r/compsci

Any reputable CS undergrad programs that can be done online in Canada?

I already have a degree and am looking to get a second degree in CS. I don't want to be going to a campus unless I really have to, and it seems reasonable to ask that a program for CS be taught entirely online.

Any advice would be appreciated.

submitted by crazyol84
[link] [14 comments]

September 17, 2014 09:27 PM

StackOverflow

Clojure + Korma - SUM aggregation query with IF condition

How does sum-if work in Korma?

Here is the sample query

SELECT SUM(if(items.quantities > 1, 1, 0)) AS multiples FROM items;

I got this to work with raw-exec provided by Korma. But, I am interested in knowing how to write this in the Korma syntax.

I have tried looking at http://sqlkorma.com/docs#select

by rtndp at September 17, 2014 09:25 PM

StackOverflow

How to define a mapping function for GADT in OCaml?

I'm trying to study the possibilities of GADTs in OCaml language and define as strong as possible what exactly a mapping function should do with such types. Unfortunately, I did not manage to finish the definition of the mapping function for a type in the example below. The full text of this example can be found here. I would be grateful for any help in this matter. Thank you all.

type ('a, _) expr =
  | Const : 'a -> ('a, 'a) expr
  | Add : ('a, 'a) expr * ('a, 'a) expr -> ('a, 'a) expr
  | Less : ('a, 'a) expr * ('a, 'a) expr -> ('a, bool) expr
  | Not : ('a, bool) expr -> ('a, bool) expr
  | If : ('a, bool) expr * ('a, 'a) expr * ('a, 'a) expr -> ('a, 'a) expr

by ramntry at September 17, 2014 09:08 PM

UnixOverflow

How to remove or shorten F1 Boot Prompt on NanoBSD 4G USB disk that fail with "fdisk: /boot/mbr: Device not configured"?

I have read the pfSense documentation Remove F1 Boot Prompt, however that doesn't seem to apply to our pfSense-2.1.1-PRERELEASE-4g-amd64-nanobsd_vga-20140131-1030.img installation.

The fdisk -B da0 command fails after Do you want to change the boot code? [n] y with fdisk: /boot/mbr: Device not configured.

The console outputs:

GEOM_PART: integrity check failed (da0, MBR)
GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s).
GEOM: da0s2: geometry does not match label (16h,63s != 255h,63s).

A workaround might be to shorten the boot0 Boot Manager timeout value, but each of

  • boot0cfg -t 1
  • boot0cfg -t 1 da0
  • and boot0cfg -t 1 /dev/da0

results in /usr/sbin/boot0cfg: Device not configured., and after a reboot these commands result in /usr/sbin/boot0cfg: Input/output error..

How to remove or shorten F1 Boot Prompt on Amd64 NanoBSD 4G USB disk?

by Pro Backup at September 17, 2014 09:01 PM

StackOverflow

Accessing a custom dispatcher defined in application.conf inside a Scala trait

I'm doing a few operations on Futures inside a trait.

trait MyTrait {
  //Future based operations 

}

Instead of using ExecutionContext.Implicits.global for my Future, I want to use one that is defined in my application.conf.

akka {
  my-batch-dispatcher {
    type = Dispatcher
    executor = "fork-join-executor"
    fork-join-executor {
      parallelism-min = 10
      parallelism-factor = 2.0
      parallelism-max = 10
    }
    throughput = 20
  }
}

Inside my actor I can do a lookup to get the execution context.

  implicit val ec = context.system.dispatchers.lookup("akka.my-batch-dispatcher")

Not sure how to do this inside my trait.
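One sketch, assuming the trait is always mixed into something that can reach an ActorSystem: declare the ExecutionContext as an abstract implicit member of the trait and let each concrete class decide where it comes from.

import akka.actor.Actor
import scala.concurrent.{ExecutionContext, Future}

trait MyTrait {
  // Each mixer-in supplies the context; Future operations pick it up implicitly.
  implicit def ec: ExecutionContext

  def batchWork(): Future[Int] = Future { 42 } // hypothetical operation
}

class MyActor extends Actor with MyTrait {
  implicit lazy val ec: ExecutionContext =
    context.system.dispatchers.lookup("akka.my-batch-dispatcher")

  def receive = { case _ => batchWork() }
}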

by Soumya Simanta at September 17, 2014 08:55 PM

Can Slick update database schemas?

I'm using Slick 2.1.0 with Scala to do insertions and queries against a database. However, I might be using it for table creation as well, with a possible need to update the tables' schemas later. Can schema updates like this be done with Slick, or can it only do table creation?
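For what it's worth, Slick 2.x ships DDL for creating and dropping tables but has no notion of migrating an existing schema; a dedicated migration tool such as Flyway or Liquibase usually fills that gap. A minimal sketch of what Slick itself can do:

import scala.slick.driver.H2Driver.simple._

class Users(tag: Tag) extends Table[(Int, String)](tag, "USERS") {
  def id   = column[Int]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  def *    = (id, name)
}

val users = TableQuery[Users]

Database.forURL("jdbc:h2:mem:test", driver = "org.h2.Driver") withSession {
  implicit session =>
    users.ddl.create // CREATE TABLE; Slick offers no ddl-level ALTER
}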

by mp94 at September 17, 2014 08:45 PM

ansible include_if_exists

I'm looking for a way to conditionally include a file in an Ansible play if the file exists. Using "include" unfortunately throws a fatal error if the file doesn't exist. I'm looping through a bunch of packages, installing them, and I want to check for an optional config file for each package. See the simplified example below:

---
- name: Basic setup of an Ubuntu box
  hosts: all
  vars:
    packages:
      - ack-grep
      - vim
      - zsh
      - htop
      - openssh-server
      - cowsay
  tasks:
    - name: Run package configuration
      action: apt name=$item
      include: "packages/${item}.yml"
      with_items: $packages

As soon as the script tries to include a file that doesn't exist, it stops with an error. I'm sure I'm just going about this the wrong way, but I've been at it for hours and tried everything I can think of, with no results.

by Gerry at September 17, 2014 08:32 PM

StackOverflow

Cannot load net.sourceforge.jtds.jdbc.Driver in Tomcat - again

Same error as Cannot load net.sourceforge.jtds.jdbc.Driver in Tomcat but that solution does not work this time. Just completed a Tomcat 8.0.9 to 8.0.12 update on a FreeBSD 10 server and am once again receiving that error even though the jtds jar is in the lib folder. I've downloaded a fresh copy of jtds in case the old one got corrupted and I've also redeployed my WAR (just in case). No change. Obviously rolling back to Tomcat 8.0.9 is an option as a workaround, but I have some time to work on it and it's wise to try to stay up to date on server software... Ideas on why I might be getting this error again and how to solve it?

22-Jul-2014 15:21:17.811 SEVERE [http-nio-443-exec-9] org.apache.catalina.core.StandardWrapperValve.invoke Ser
vlet.service() for servlet [base] in context with path [] threw exception
 com.sscorp.base.exception.SystemException: org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC drive
r class 'net.sourceforge.jtds.jdbc.Driver'
        at com.sscorp.base.util.DBUtils.query(DBUtils.java:175)
        at com.sscorp.base.util.DBUtils.query(DBUtils.java:158)
        at com.sscorp.base.util.DBUtils.findEntitiesBy(DBUtils.java:324)
        at com.sscorp.base.util.DBUtils.findEntityBy(DBUtils.java:315)
        at com.sscorp.base.dao.common.UserDAO.findByUsernameAndPassword(UserDAO.java:50)
        at com.sscorp.base.web.AuthenticationFilter.doFilter(AuthenticationFilter.java:56)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:615)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:136)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
        at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:610)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:526)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1078)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:655)
        at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:2
22)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1566)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1523)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class 'net.sourceforge.jtds.jdb
c.Driver'
        at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1136)
        at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:880)
        at org.apache.commons.dbutils.QueryRunner.prepareConnection(QueryRunner.java:334)
        at org.apache.commons.dbutils.QueryRunner.query(QueryRunner.java:483)
        at com.sscorp.base.util.DBUtils.query(DBUtils.java:172)
        ... 24 more
Caused by: java.lang.ClassNotFoundException: net.sourceforge.jtds.jdbc.Driver
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1324)
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1177)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:259)
        at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1130)
        ... 28 more

by Brian Knoblauch at September 17, 2014 08:27 PM

Scala how can I count the number of occurrences in a list

val list = List(1,2,4,2,4,7,3,2,4)

I want to implement it like this: list.count(2) (returns 3).
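count takes a predicate rather than an element, so the direct answer is a one-liner; a grouped tally is sketched as an alternative for when many different elements will be queried.

val list = List(1, 2, 4, 2, 4, 7, 3, 2, 4)

list.count(_ == 2) // == 3

// Precompute a tally if many lookups are needed:
val tally: Map[Int, Int] = list.groupBy(identity).mapValues(_.size).toMap
tally(2) // == 3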

by Fugees at September 17, 2014 08:27 PM

OpenJDK Windows Distribution

Does anyone know of an OpenJDK distribution for Windows? Specifically, I am looking for JDK 8 32-bit. I found a 64-bit distribution: http://www.azulsystems.com/products/zulu

but I really need 32 bit.

by MonkBen at September 17, 2014 08:23 PM

Getting the value out of a Future in Scala

I have the following code snippet that I use to read a record from the database and I'm using ReactiveMongo for this.

val futureList: Future[Option[BSONDocument]] = collection.find(query).cursor[BSONDocument].headOption

val os: Future[Option[Exam]] = futureList.map {
  (list: Option[BSONDocument]) => list match {
    case Some(examBSON) => {
      val id = examBSON.getAs[Int]("id").get
      val text = examBSON.getAs[String]("text").get
      val description = examBSON.getAs[String]("description").get
      val totalQuestions = examBSON.getAs[Int]("totalQuestions").get
      val passingScore = examBSON.getAs[Int]("passingScore").get
      Some(Exam(id, text, description, totalQuestions, passingScore))
    }
    case None => None
  }
}.recover {
  case t: Throwable => // Log exception
  None
}

I do not want to change my method signature to return a Future. I want to get the value inside the Future and return it to the caller.
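If the signature really must stay synchronous, the only way out is to block. A sketch with an arbitrary 5-second timeout; note that Await.result throws if the timeout elapses, and that blocking a thread works against ReactiveMongo's asynchronous design:

import scala.concurrent.Await
import scala.concurrent.duration._

val maybeExam: Option[Exam] = Await.result(os, 5.seconds)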

by user3102968 at September 17, 2014 08:11 PM

AWS

Consistent View for Elastic MapReduce's File System

Many AWS developers are using Amazon Elastic MapReduce (a managed Hadoop service) to quickly and cost-effectively build applications that process vast amounts of data. The EMR File System (EMRFS) allows AWS customers to use Amazon Simple Storage Service (S3) as a durable and cost-effective data store that is independent of the memory and compute resources of any particular cluster. It also allows multiple EMR clusters to process the same data set. This file system is accessed via the s3:// scheme.

Because S3 is designed for eventual consistency, if one application creates an S3 object it may take a short time (typically measured in tens or hundreds of milliseconds) before it is visible in a LIST operation. This small window can sometimes lead to inconsistent results when the output files produced by one MapReduce job are used as the input of another job.

Today we are making EMRFS even more powerful with the addition of a consistent view of the files stored in Amazon Simple Storage Service (S3). If you enable this feature, you can be confident that all of your files will be processed as intended when you run a chained series of MapReduce jobs. This is not a replacement file system. Instead, it extends the existing file system with mechanisms that are designed to detect and react to inconsistencies. The detection and recovery process includes a retry mechanism. After it has reached a configurable limit on the number of retries (to allow S3 to return what EMRFS expects in the consistent view), it will either (your choice) raise an exception or log the issue and continue.

The EMRFS consistent view creates and uses metadata in an Amazon DynamoDB table to maintain a consistent view of your S3 objects. This table tracks certain operations but does not hold any of your data. The information in the table is used to confirm that the results returned from an S3 LIST operation are as expected, thereby allowing EMRFS to check list consistency and read-after-write consistency.

Enabling the Consistent View
This feature is not enabled by default. You can, however, enable it when you create a new Elastic MapReduce cluster from the command line, the Elastic MapReduce API, or the Elastic MapReduce Console. Here are the options that are available to you when you use the console:

[Console screenshot: EMRFS consistent view and encryption options]

As you can see in the screenshot, you can also enable S3 server-side encryption for EMRFS.

Here's how you enable the consistent view from the command line when you create a new EMR cluster:

$ aws emr create-cluster --name TestCluster --ami-version 3.2.1 \
  --instance-type m3.xlarge --instance-count 3 \
  --emrfs Consistent=True --ec2-attributes KeyName=YOURKEYNAME

Important Details
In general, once enabled, this feature will enforce consistency with no action on your part. For example, it will create, populate, and update the DynamoDB table as needed. It will not, however, delete the table (it has no way to know when it is safe to do so). You can delete the table through the DynamoDB console or you can add a final cleanup step to the last job on your processing pipeline.

You can also sync a folder to load it into a consistent view. This is useful to add new folders to the view that were not written by EMRFS, or to manually sync a folder being managed by EMRFS. You can log in to the Master node of your cluster and run the emrfs command like this:

$ emrfs sync s3://bucket/folder

There is no charge for this feature, but you will pay an hourly charge for the data stored in the DynamoDB table (the first 100 MB is available to you at no charge as part of the AWS Free Usage Tier) and for the level of provisioned read and write capacity. By default, the table is provisioned for 500 read capacity units and 100 write capacity units. As I noted earlier, you are responsible for deleting the table when you no longer need it.

Be Consistent
This feature is available now and you can start using it today!

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at September 17, 2014 08:03 PM

/r/emacs

Emacs on Mac OS - is it fully compatible?

I'm considering making the switch from a Ubuntu-driven PC to a Macbook Pro (I'm yet to find a PC that offers the same form factor at 13", tips are welcomed). However, switching out my beloved Emacs would be out of the question.

Are there any insufficiencies in the Mac OS version as opposed to the Linux-based one? Or would it make no difference whatsoever?

submitted by Oulipopo
[link] [40 comments]

September 17, 2014 07:47 PM

StackOverflow

Using Absolute Path with scala.xml.XML.loadFile

I am attempting to read in an XML file using the Scala function scala.xml.XML.loadFile(absolute_path). However I get a java.io.FileNotFoundException. Why wouldn't an absolute path work? And what would be the workaround for this?

Thanks
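An absolute path does work with XML.loadFile, so the usual culprit is that the path is not what the JVM actually sees (typo, permissions, or an unexpected working directory). A small sketch to check before parsing; the path below is hypothetical:

import java.io.File
import scala.xml.XML

val path = "/absolute/path/to/file.xml" // hypothetical
val f = new File(path)
// Compare what the process sees with what you expect:
println(s"exists=${f.exists}  cwd=${System.getProperty("user.dir")}")
val doc = XML.loadFile(f)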

by Mayumi at September 17, 2014 07:44 PM

Lambda the Ultimate Forum

Extended Axiomatic Language

Axiomatic language is a formal system for specifying recursively enumerable sets of hierarchical symbolic expressions. But axiomatic language does not have negation. Extended axiomatic language is based on the idea that when one specifies a recursively enumerable set, one is simultaneously specifying the complement of that set (which may not be recursively enumerable). This complement set can be useful for specification. Extended axiomatic language makes use of this complement set and can be considered a form of logic programming negation. The web page defines the language and gives examples.

September 17, 2014 07:28 PM

Planet Clojure

Generate random sentence from grammar

Code which generates a random sentence using a specific grammar. The code shows an example of a reducer function and transducers.

(def grammar
     {:sentence [[:noun-phrase :verb-phrase]]
      :noun-phrase [[:article :noun]]
      :verb-phrase [[:verb :noun-phrase]]
      :article ["the" "a"]
      :noun ["ball" "car" "man" "woman" "dog" "table" "bed"]
      :verb ["hits" "stole" "saw" "licked" "bites"]})

(declare process-rule-vector)

(defn process-keyword-element [rule]
  (cond
    (keyword? rule) (process-rule-vector (rand-nth (get grammar rule)))
    :else rule))

;; transducer built from the element processor
(def process-keyword-elements
  (map process-keyword-element))

(defn process-rule-vector [rule]
  (cond
    (vector? rule) (sequence process-keyword-elements rule)
    :else rule))

(defn build-str 
  ([] nil)
  ([a] a)
  ([a b] (if a (str a " " b) b)))
    
(defn make-sentence []
   (reduce build-str   
     (flatten (process-rule-vector [:sentence]))))

by Hivemind [big-safari.io] at September 17, 2014 07:27 PM

Transducers

Transducers are a new feature in Clojure 1.7, and documentation and examples are sparse. Transducers make it possible to compose transformations in a natural fashion.

That being said, transducers just make it possible to work with map and filter functions without having to give them a sequence to work on right away. This allows for code without deeply nested maps and filters and allows for a higher level of abstraction and more effective code.

Here is a short example:

; Infinite sequence of integers
(def all-integers (range))

; A composition of transducers
(def my-trans (comp (map #(* % %)) (filter #(> % 100))))

; Lazily transform the data
(sequence my-trans (take 20 all-integers))

Get up and running with a new project.clj using:

lein new transmogrifier

Remember to change the Clojure version to 1.7.0-alpha2 or higher.

(defproject transmogrifier "0.1.0-SNAPSHOT"
  :description "transmogrifier"
  :url "http://hivemind.big-safari.io/"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.7.0-alpha2"]]
  :repl-options {:prompt (fn [ns] (str "\u001B[35m[\u001B[32m" ns "\u001B[35m]\u001B[33m?:\u001B[m " ))
                 :welcome (println "Clojure REPL")
                 :init-ns transmogrifier.core})

Here is a more complex example:

(ns transmogrifier.core)

; Define an infinite data source for the example
(def data (cycle ["i" "went" "for" "a" "walk" "in" "the" "park"]))

; Define an uppercase transducer
(def ucase-op
   (map #(clojure.string/capitalize %)))

; Define a transducer that removes elements shorter than 3 characters
(def remove-short-op
   (filter #(> (count %) 2)))

; Function used as the reducer to merge strings; str could also have been used
(defn build-str 
  ([] nil)
  ([a] a)
  ([a b] (if a (str a " " b) b)))

; Compose transducers and transform test data
(defn demo [size]  
  (reduce build-str 
          (take size 
                (sequence (comp remove-short-op ucase-op) data))))

; Use transduce to transform and reduce sequence
(defn demo2 [size]  
  (transduce (comp remove-short-op ucase-op) 
             build-str 
             (take size data)))

Read more

Cognitect - Transducers are coming

Gigasquidsoftware - Green eggs and transducers

Transducers Are Fundamental

Gist Ptaoussanis

The rules of transducer club

by Hivemind [big-safari.io] at September 17, 2014 07:27 PM

StackOverflow

File IO population of Java class from Clojure call fails

My Java program builds a data structure after iterating through the contents of several directories. It does this just fine:

public class ProblemDealer {
    ArrayList<ProblemSet> sets;
    int setNum;
    int probNum;

// ... constructors, etc ...

    public int generateProblems () {
    int numProblems = 0;
    for(File file : new File("Problems").listFiles()) {             
            ProblemSet newSet = new ProblemSet(file.getName());         
            sets.add(newSet);   // <------ LINE 36
            System.out.println("Added ProblemSet: " + file.getName());
            for(File problem : file.listFiles()) {                      
        System.out.println("Added Problem: " + problem.getName());
                newSet.addProblem(problem);                             
        numProblems++;
            } // END inner loop (problems)
        } // END outer loop (files)
    return numProblems;
    }
} // END ProblemDealer

Launching it from Java, it performs as expected and I end up with the appropriate array lists, etc. But when I try it from the Clojure REPL (launched from the same place) it doesn't work:

user=> (def my-dealer (ProblemDealer.))
#'user/my-dealer
user=> (.generateProblems my-dealer)
Added ProblemSet: 2x1 Classmates Problems
NullPointerException   project1.ProblemDealer.generateProblems (ProblemDealer.java:36)

So, it reads the first directory fine, but dies after that. What's going on? Why the difference in Clojure and out?

by WorldsEndless at September 17, 2014 06:33 PM

Efficient nearest neighbour search in Scala

Consider this coordinate class with the Euclidean distance,

case class coord(x: Double, y: Double) {
  def dist(c: coord) = Math.sqrt( Math.pow(x-c.x, 2) + Math.pow(y-c.y, 2) ) 
}

and a grid of coordinates, for instance

val grid = (1 to 25).map {_ => coord(Math.random*5, Math.random*5) }

Then for any given coordinate

val x = coord(Math.random*5, Math.random*5) 

the nearest points to x are

val nearest = grid.sortWith( (p,q) => p.dist(x) < q.dist(x) )

so the first three closest are nearest.take(3).

Is there a way to make these calculations more time-efficient, especially for the case of a grid with one million points?
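One common speed-up is a uniform grid index: bucket the points by cell once, then examine only the cells near the query instead of sorting everything. A sketch, assuming the points live in [0,5) x [0,5) as above; it is approximate near ring boundaries (a production version would expand the ring once more before finalizing, or use a k-d tree) and assumes k is at most the number of points:

val cell = 0.5 // tuning knob: roughly the expected nearest-neighbour distance
def key(c: coord) = ((c.x / cell).toInt, (c.y / cell).toInt)
val index: Map[(Int, Int), Seq[coord]] = grid.groupBy(key)

// All points within r cells of the query's cell.
def candidates(q: coord, r: Int): Seq[coord] = {
  val (i, j) = key(q)
  for {
    di <- -r to r
    dj <- -r to r
    p  <- index.getOrElse((i + di, j + dj), Nil)
  } yield p
}

// Grow the ring until enough candidates exist, then rank only those.
def nearest(q: coord, k: Int): Seq[coord] = {
  val cs = Iterator.from(1).map(candidates(q, _)).find(_.size >= k).get
  cs.sortBy(_.dist(q)).take(k)
}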

by enzyme at September 17, 2014 06:31 PM

CompsciOverflow

How can I teach computer science without using computers?

In some places in the world, people don't usually have access to (and hence little knowledge of) computers, and even if they have, hard- and software are outdated and usage plagued by power outages and such. Access to (good) books also tends to be lacking. How can I teach computer science under such circumstances?

I'm worried that without being able to do experiments and apply what they learn, they won't learn (well) at all even though they are incredibly motivated and devote most of their time to this hobby. Is it possible to teach CS only theoretically?

by Abhimanyu at September 17, 2014 06:26 PM

Less Wrong

Simulate and Defer To More Rational Selves

Submitted by BrienneStrohl • 70 votes • 56 comments

I sometimes let imaginary versions of myself make decisions for me.

I first started doing this after Anna told me (something along the lines of) this story. When she first became the executive director of CFAR, she suddenly had many more decisions to deal with per day than ever before. "Should we hire this person?" "Should I go buy more coffee for the coffee machine, or wait for someone else deal with it?" "How many participants should be in our first workshop?" "When can I schedule time to plan the fund drive?" 

I'm making up these examples myself, but I'm sure you, too, can imagine how leading a brand new organization might involve a constant assault on the parts of your brain responsible for making decisions. She found it exhausting, and by the time she got home at the end of the day, a question like, "Would you rather we have peas or green beans with dinner?" often felt like the last straw. "I don't care about the stupid vegetables, just give me food and don't make me decide any more things!"

She was rescued by the following technique. When faced with a decision, she'd imagine "the Executive Director of CFAR", and ask herself, "What would 'the Executive Director of CFAR' do?" Instead of making a decision, she'd make a prediction about the actions of that other person. Then, she'd just do whatever they'd do!

(I also sometimes imagine what Anna would do, and then do that. I call it "Annajitsu".)

In Anna's case, she was trying to reduce decision fatigue. When I started trying it out myself, I was after a cure for something slightly different.

Imagine you're about to go bungee jumping off a high cliff. You know it's perfectly safe, and all you have to do is take a step forward, just like you've done every single time you've ever walked. But something is stopping you. The decision to step off the ledge is entirely yours, and you know you want to do it because this is why you're here. Yet here you are, still standing on the ledge. 

You're scared. There's a battle happening in your brain. Part of you is going, "Just jump, it's easy, just do it!", while another part--the part in charge of your legs, apparently--is going, "NOPE. Nope nope nope nope NOPE." And you have this strange thought: "I wish someone would just push me so I don't have to decide."

Maybe you've been bungee jumping, and this is not at all how you responded to it. But I hope (for the sake of communication) that you've experienced this sensation in other contexts. Maybe when you wanted to tell someone that you loved them, but the phrase hovered just behind your lips, and you couldn't get it out. You almost wished it would tumble out of your mouth accidentally. "Just say it," you thought to yourself, and remained silent. For some reason, you were terrified of the decision, and inaction felt more like not deciding.

When I heard this story from Anna, I had social anxiety. I didn't have way more decisions than I knew how to handle, but I did find certain decisions terrifying, and was often paralyzed by them. For example, this always happened if someone I liked, respected, and wanted to interact with more asked to meet with them. It was pretty obvious to me that it was a good idea to say yes, but I'd agonize over the email endlessly instead of simply typing "yes" and hitting "send".

So here's what it looked like when I applied the technique. I'd be invited to a party. I'd feel paralyzing fear, and a sense of impending doom as I noticed that I likely believed going to the party was the right decision. Then, as soon as I felt that doom, I'd take a mental step backward and not try to force myself to decide. Instead, I'd imagine a version of myself who wasn't scared, and I'd predict what she'd do. If the party really wasn't a great idea, either because she didn't consider it worth my time or because she didn't actually anticipate me having any fun, she'd decide not to go. Otherwise, she'd decide to go. I would not decide. I'd just run my simulation of her, and see what she had to say. It was easy for her to think clearly about the decision, because she wasn't scared. And then I'd just defer to her.

Recently, I've noticed that there are all sorts of circumstances under which it helps to predict the decisions of a version of myself who doesn't have my current obstacle to rational decision making. Whenever I'm having a hard time thinking clearly about something because I'm angry, or tired, or scared, I can call upon imaginary Rational Brienne to see if she can do any better.

Example: I get depressed when I don't get enough sunlight. I was working inside where it was dark, and Eliezer noticed that I'd seemed depressed lately. So he told me he thought I should work outside instead. I was indeed a bit down and irritable, so my immediate response was to feel angry--that I'd been interrupted, that he was nagging me about getting sunlight again, and that I have this sunlight problem in the first place. 

I started to argue with him, but then I stopped. I stopped because I'd noticed something. In addition to anger, I felt something like confusion. More complicated and specific than confusion, though. It's the feeling I get when I'm playing through familiar motions that have tended to lead to disutility. Like when you're watching a horror movie and the main character says, "Let's split up!" and you feel like, "Ugh, not this again. Listen, you're in a horror movie. If you split up, you will die. It happens every time." A familiar twinge of something being not quite right.

But even though I noticed the feeling, I couldn't get a handle on it. Recognizing that I really should make the decision to go outside instead of arguing--it was just too much for me. I was angry, and that severely impedes my introspective vision. And I knew that. I knew that familiar not-quite-right feeling meant something was preventing me from applying some of my rationality skills. 

So, as I'd previously decided to do in situations like this, I called upon my simulation of non-angry Brienne. 

She immediately got up and went outside.

To her, it was extremely obviously the right thing to do. So I just deferred to her (which I'd also previously decided to do in situations like this, and I knew it would only work in the future if I did it now too, ain't timeless decision theory great). I stopped arguing, got up, and went outside. 

I was still pissed, mind you. I even felt myself rationalizing that I was doing it because going outside despite Eliezer being wrong wrong wrong is easier than arguing with him, and arguing with him isn't worth the effort. And then I told him as much over chat. (But not the "rationalizing" part; I wasn't fully conscious of that yet.)

But I went outside, right away, instead of wasting a bunch of time and effort first. My internal state was still in disarray, but I took the correct external actions. 

This has happened a few times now. I'm still getting the hang of it, but it's working.

Imaginary Rational Brienne isn't magic. Her only available skills are the ones I have in fact picked up, so anything I've not learned, she can't implement. She still makes mistakes. 

Her special strength is constancy.

In real life, all kinds of things limit my access to my own skills. In fact, the times when I most need a skill will very likely be the times when I find it hardest to access. For example, it's more important to consider the opposite when I'm really invested in believing something than when I'm not invested at all, but it's much harder to actually carry out the mental motion of "considering the opposite" when all the cognitive momentum is moving toward arguing single-mindedly for my favored belief.

The advantage of Rational Brienne (or, really, the Rational Briennes, because so far I've always ended up simulating a version of myself that's exactly the same except lacking whatever particular obstacle is relevant at the time) is that her access doesn't vary by situation. She can always use all of my tools all of the time.

I've been trying to figure out this constancy thing for quite a while. What do I do when I call upon my art as a rationalist, and just get a 404 Not Found? Turns out, "trying harder" doesn't do the trick. "No, really, I don't care that I'm scared, I'm going to think clearly about this. Here I go. I mean it this time." It seldom works.

I hope that it will one day. I would rather not have to rely on tricks like this. I hope I'll eventually just be able to go straight from noticing dissonance to re-orienting my whole mind so it's in line with the truth and with whatever I need to reach my goals. Or, you know, not experiencing the dissonance in the first place because I'm already doing everything right.

In the mean time, this trick seems pretty powerful.
56 comments

September 17, 2014 06:11 PM

Lobsters

No, Nate, brogrammers may not be macho, but that’s not all there is to it

“How French High Theory and Dr. Seuss can help explain Silicon Valley’s Gender Blindspots”

Comments

by englishm at September 17, 2014 06:10 PM

/r/compilers

Code review for my new language's compiler?

I have been creating the language Wake (wakelang.com), and have been slowly learning the gap between knowing the algorithms of a compiler and implementing them all well, especially with some of my own language's ideas. The compiler is written in C++ at github.com/MichaelRFairhurst/wake-compiler

I haven't gotten to optimizations yet as I am focusing on features before performance.

I welcome critique on absolutely anything -- using bison, using C++, using C, my class/method names, my error collection mechanisms, my tests, my C++ abilities (I've only recently started truly appreciating RAII), the grammar or language or goals of the project.

Thanks to any who can find the time!

submitted by developer-mike
[link] [2 comments]

September 17, 2014 06:09 PM

CompsciOverflow

Why is the processor's pipeline delay calculated as N*max(Delay) ? why not N*(D1 + D2 + D3 ... )?

Consider a four-stage pipeline where the stages have delays D1, D2, D3 and D4. The total delay because of the various stages should then be N * (D1 + D2 + D3 + D4), where N is the number of instructions, but I see that this is not the case; I see here that the delay is calculated as N*max(D1,D2,D3,D4). Why is only the max value taken into account?

I tried to apply the above two ideas to the chart below and neither of them works.

For the chart below, consider that the numbers on top are the time, and S1, S2, S3 and S4 denote the time an instruction spends in each stage of the pipeline

with N*(max(S1,S2,S3,S4)) = 4*(4) = 16,
and with N*(S1 + S2 + S3 + S4) = 4*(1+2+4+1) = 32

1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20
S1 S2 S2 S3 S3 S3 S3 S4
   S1    S2 S2       S3 S3 S3 S3 S4
      S1       S2 S2             S3 S3 S3 S3 S4
         S1          S2 S2                   S3 S3 S3 S3 S4

by vikkyhacks at September 17, 2014 06:01 PM

/r/compsci

Search algorithms and finding a specific distance

All, I am interested in learning about search. I have a passing knowledge of the major algorithms (Dijkstra’s,A*,BFS,DFS).

I am interested in learning more about them but find a lot of the explanations on the web overwhelming - can anyone recommend good books and resources?

Secondly, I am interested in implementing search of a specified distance, therefore I am not interested in the shortest route but any route that is close to a pre-decided distance. What algorithm would suit this best? Would you still use bidirectional Dijkstra’s just with a different termination condition?

submitted by matt182
[link] [comment]

September 17, 2014 05:44 PM

RFC: Request for Cryptographers

We're going to start having regular AMA / Ask The Experts threads. Since crypto is (1) awesome and (2) relevant to ongoing world events (NSA spy scandals, Iran sabotage, &c), I think it's a good topic to start with.

If you'd like to answer questions and are a cryptography researcher (or work on major crypto software), speak up! If you PM me some proof of your identity I'll flair your username and designate you for the Q&A thread.

submitted by cypherx
[link] [8 comments]

September 17, 2014 05:36 PM

StackOverflow

"Trickiness" to calling Scala code from Java?

There is a Scala library (that only exists written in Scala) that I really want to use in my Java app. I am trying to evaluate whether I can do this or not without suffering any hidden gotchyas/caveats/pitfalls. So I went straight to the Scala FAQ, where they answer this very question (well, sort of):

Accessing Java classes from Scala code is no problem at all. Using a Scala class from Java can get tricky, in particular if your Scala class uses advanced features like generics, polymorphic methods, or abstract types.

I then found several other sites (such as this one) that seem to indicate there is no problem in calling Scala from inside Java, as it is all compiled JVM bytecode.

So I have two very conflicting sources of information, and I'm stuck in the middle trying to determine which use cases make calling Scala from Java "tricky", and which use cases are straightforward. Any ideas?

by smeeb at September 17, 2014 05:09 PM

StackOverflow

OpenShift cloud computing configuration: is it possible completely in the cloud?

Is it possible to build a big data application in the cloud with Red Hat's PaaS OpenShift? I'm looking at how to build a Scala application with Hadoop (HDFS), Spark, and Apache Mahout in the cloud, but I can't find anything about it. I've seen something with HortonWorks, but nothing clear about how to install it in an OpenShift environment or how to add an HDFS node in the cloud. Is it possible with OpenShift?

It's possible in Amazon, but my question is: is it possible in OpenShift?

by dipo at September 17, 2014 05:00 PM

CompsciOverflow

How to simplify the sum over 1/i?

With the recurrence relation: $$ T(n) = 2T\left(\frac{n}{2}\right) + \frac{n}{\log(n)}$$

The "sum for all levels" in the recurrence tree is: $$ \sum_{i=0}^{\log n -1} \frac{n}{\log n - i} = \sum_{i=1}^{\log n} \frac{n}{i} = n \sum_{i=1}^{\log n} \frac{1}{i}$$

In the analysis of the recurrence, $\sum 1/i$ appears and is then bounded by $\Theta(\log\log n)$. Why is this?
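The bound is just the harmonic number: partial sums of $1/i$ grow like the natural logarithm of the index, and here the index only runs up to $\log n$,

$$\sum_{i=1}^{m} \frac{1}{i} = \ln m + O(1), \qquad m = \log n \;\Rightarrow\; \sum_{i=1}^{\log n} \frac{1}{i} = \Theta(\log\log n).$$

The per-level sums therefore total $\Theta(n\log\log n)$, which dominates the $\Theta(n)$ contribution of the leaves, giving $T(n) = \Theta(n\log\log n)$.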

by Roy Kesserwani at September 17, 2014 04:41 PM

StackOverflow

How to create a new folder?

Or more specifically, how to create a new folder with a random name in Scala?

Using the Java NIO API, the code was this:

  val folderPath: Path = Paths.get("src/test/resources/test-documents/")
  val tmpDir: Path = Files.createTempDirectory(folderPath, null)

thanks to all
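That NIO snippet is already valid Scala, since these are plain JVM APIs. A sketch covering both the random-name case and, for completeness, a fixed-name folder:

import java.nio.file.{Files, Path, Paths}

val folderPath: Path = Paths.get("src/test/resources/test-documents/")

// Random name: createTempDirectory generates one (a null prefix is allowed).
val tmpDir: Path = Files.createTempDirectory(folderPath, null)

// Fixed name: createDirectories also creates any missing parents.
val fixed: Path = Files.createDirectories(folderPath.resolve("new-folder"))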

by YoBre at September 17, 2014 04:32 PM

Planet Theory

NSF Secure & Trustworthy Cyberspace solicitation and joint program with US-Israel BSF

(1) The new NSF Secure and Trusthworthy Cyberspace (SaTC) solicitation has been released.  Note that Frontiers (up to $10M) have been replaced by Large (up to $3M) proposals.  The submission deadlines are:

  • Small: January 14, 2015
  • Medium: November 10, 2014
  • Large: November 20, 2014
  • Education:    December 19, 2014

(2) NSF and the US-Israel Binational Science Foundation (BSF) have developed a collaboration arrangement whereby US researchers may receive funding from the NSF and collaborating Israeli researchers may receive funding from the BSF. Proposals may be submitted to the SaTC Small category, with the identical proposal submitted to BSF to support Israeli collaborators.  Proposals will be reviewed by NSF; those selected for funding will have separate agreements with NSF (for US researchers) and BSF (for Israeli researchers).

by salilvadhan at September 17, 2014 04:31 PM

CompsciOverflow

Maximum sum subset of an array with an extra condition

We are given numbers $n \leq 200$, $k \leq 10$ and an array of $3n$ positive integers not greater than $10^6$. Find the maximum possible sum of a subset of elements of this array, such that in every contiguous $n$ elements there are at most $k$ chosen.

As this is an old high-school-level contest problem, I ask only for hints. It also means I know there exists a solution far quicker than the one proposed by D.W., and most likely not very complicated, so the question is still open.

My ideas mostly involved dynamic programming. I was trying to calculate, for each prefix of the array, the best score we can achieve. However, in order to do this, I think I would need to calculate it for every prefix and every possible choice of $k$ numbers in the last $n$ numbers of that prefix, resulting in complexity $O(n^{k+1})$, which is far from acceptable. I also thought of looking at pairs of positions in the array which are distant by $n$ and their relation to each other, but this approach fails: in an optimal solution we would not choose exactly $k$ elements in every contiguous block of $n$; sometimes it could be fewer.

by Cris at September 17, 2014 04:29 PM

StackOverflow

Clojure style / idiom: creating maps and adding them to other maps

I'm writing a Clojure programme to help me perform security risk assessments (I finally got fed up with Excel).

I have a question on Clojure idiom and style.

To create a new record about an asset in a risk assessment, I pass in the risk assessment I'm currently working with (a map) and a bunch of information about the asset; my make-asset function creates the asset, adds it to the risk assessment, and returns the new risk assessment.

(defn make-asset
  "Makes a new asset, adds it to the given risk assessment
  and returns the new risk assessment."
  [risk-assessment name description owner categories
   & {:keys [author notes confidentiality integrity availability]
      :or   {author "" notes "" confidentiality 3 integrity 3 availability 3}}]
  (let [ia-ref (inc (risk-assessment :current-ia-ref))]
    (assoc risk-assessment
      :current-ia-ref ia-ref
      :assets (conj (risk-assessment :assets)
                    {:ia-ref ia-ref
                     :name name
                     :desc description
                     :owner owner
                     :categories categories
                     :author author
                     :notes notes
                     :confidentiality confidentiality
                     :integrity integrity
                     :availability availability
                     :vulns []}))))

Does this look like a sensible way of going about it?

Could I make it more idiomatic, shorter, simpler?

Particular things I am thinking about are:

  • should make-asset add the asset to the risk-assessment? (An asset is meaningless outside of a risk assessment).
  • is there a simpler way of creating the asset; and
  • adding it to the risk-assessment?

Thank you

by Edward Kenworthy at September 17, 2014 04:25 PM

Slick 2.0 - generic enum mapper

I have a problem upgrading from Slick 1.0 to Slick 2.0 in my Play application.

In my app I have some User defined types example:

sealed trait UserStatus

case object NewUser extends UserStatus
case object ActiveUser extends UserStatus
case object BlockedUser extends UserStatus

object UserStatus extends SerializableEnum[UserStatus] {
  def mapping = Map[String, UserStatus](
    "NEW" -> NewUser,
    "ACTIVE" -> ActiveUser,
    "BLOCKED" -> BlockedUser
  )
}

and a generic mapper for Slick 1.0:

trait SerializableEnum[T] {
  def mapping: Map[String, T]

  def reverseMapping = mapping.map(_.swap)

  implicit def enumWrites = new Writes[T] {
    def writes(o: T): JsValue = JsString(reverseMapping(o))
  }

  implicit val enumReads = new Reads[T] {
    def reads(json: JsValue): JsResult[T] = json match {
      case JsString(s) => JsSuccess(mapping(s))
      case _ => JsError("Enum type should be of proper type")
    }
  }
  implicit val enumTypeMapper = MappedTypeMapper.base[T, String](reverseMapping, mapping)
}

After migrating to Slick 2.0, the new mapper doesn't work:

trait SerializableEnum[T] {
  def mapping: Map[String, T]

  def reverseMapping:Map[T,String] = mapping.map(_.swap)

  implicit def enumWrites = new Writes[T] {
    def writes(o: T): JsValue = JsString(reverseMapping(o))
  }

  implicit val enumReads = new Reads[T] {
    def reads(json: JsValue): JsResult[T] = json match {
      case JsString(s) => JsSuccess(mapping(s))
      case _ => JsError("Enum type should be of proper type")
    }
  }
  implicit val enumTypeMapper = MappedColumnType.base[T, String](reverseMapping, mapping)
}

Compiler says:

No ClassTag available for T

It is not possible to materialize a ClassTag for a trait's abstract type parameter. Does anybody have an idea how to resolve this?

I prepared an example app for this: https://github.com/mgosk/slick-test
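
One commonly used workaround (a sketch, untested against this project): a ClassTag cannot be materialized inside the trait, where T is still abstract, but the requirement can be deferred to an implicit parameter of a def, which the compiler satisfies at each concrete use site such as UserStatus (the Reads/Writes members are unchanged and omitted):

import scala.reflect.ClassTag

trait SerializableEnum[T] {
  def mapping: Map[String, T]

  def reverseMapping: Map[T, String] = mapping.map(_.swap)

  // A def with its own implicit ClassTag defers the tag lookup to each
  // concrete use site, where T is a real class.
  implicit def enumTypeMapper(implicit ct: ClassTag[T]) =
    MappedColumnType.base[T, String](reverseMapping, mapping)
}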

by mgosk at September 17, 2014 04:22 PM

Implement interface using a member

I would like to extend a class Mesh, and I would prefer not using inheritance for this, but a member instead (I want this because I already have many classes derived from Mesh). I would like to redirect most (perhaps all?) member functions to the original implementation. I can explicitly redirect them one by one, but perhaps there is some more concise way? At the very minimum I would like to avoid having to repeat the parameter lists for the functions.

Or perhaps an implicit conversion from ExtMesh to Mesh would handle the "old" interface (without using the extend Mesh), allowing me to only add new functionality?

abstract class Mesh {
  def Func1(a:Int, b:Float, c:String) : Unit
  def Func2(a:Float, b:Float, c:Int) : Int
}


class ExtMesh extends Mesh
{
  val mesh: Mesh
  def Func1(a:Int, b:Float, c:String) = mesh.Func1(a,b,c)
  def Func2(a:Float, b:Float, c:Int) = mesh.Func2(a,b,c)

  def ExtFunc3(a:String, b:Float)
}
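
A sketch of the implicit-conversion route the question mentions: wrap the Mesh in a member and let an implicit conversion forward the old interface, so only the new functionality has to be declared:

class ExtMesh(val mesh: Mesh) {
  def extFunc3(a: String, b: Float): Unit = ???   // new functionality only
}

object ExtMesh {
  import scala.language.implicitConversions
  // Any ExtMesh used where a Mesh is expected is unwrapped automatically,
  // so Func1/Func2 need not be repeated with their full parameter lists.
  implicit def extMeshToMesh(em: ExtMesh): Mesh = em.mesh
}

The conversion also kicks in for method-call syntax: extMesh.Func1(...) compiles because Scala applies the conversion when the member is missing on ExtMesh.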

by Suma at September 17, 2014 04:09 PM

CompsciOverflow

Randomized algorithm to make a Binary Search Tree from an array of $n$ distinct elements

An array $\mathcal{A}$ of $n$ distinct integers $\{a_1,a_2,\ldots,a_n\}$ is given. I'm asked to design a randomized (esp. Las Vegas) algorithm to make a Binary Search Tree out of these elements, such that the height of the tree is $\lceil \log{_{2-\epsilon}n}\rceil$, where $\epsilon=\frac{2}{9}$.

If we had to make a perfectly balanced binary search tree, we would choose the median of the $n$ elements as root in every recursive call. But the height is not exactly $\lceil \log{_{2}n}\rceil$; there is a relaxation by $\epsilon$. So instead of choosing the median we need to choose a value randomly from the middle $x\%$ of the sorted version of $\mathcal{A}$. For example, suppose $\mathcal{A}=\{4,2,3,6,1,5,7,10,9,8\}$, so $\mathcal{A_{sorted}}=\{1,2,3,4,5,6,7,8,9,10\}$. Now because the height is not exactly $\lceil\lg n\rceil$, instead of choosing the median we choose one element randomly from, say, the middle $30\%$ of the elements, i.e. one element randomly from $\{4,5,6\}$. Here this value $30\%$ ($x\%$) will depend on $\epsilon=\frac{2}{9}$. My question is how to derive this $x\%$ from the given height information, i.e. $\lceil\log{_{2-\epsilon}n}\rceil$? $$\lceil\log{_{2-\epsilon}n}\rceil = \left\lceil\frac{\log{_{2}n}}{\log{_{2}{(2-\epsilon)}}}\right\rceil=\left\lceil\frac{\log{_{2}n}}{0.83}\right\rceil$$

I don't need a full explanation, any hint would suffice.
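
As a hint, one way to derive the fraction (a sketch): if the pivot is drawn uniformly from the middle fraction $x$ of the sorted order, the larger subtree gets at most $\frac{1+x}{2}n$ elements, and for the height bound $\lceil\log_{2-\epsilon}n\rceil$ to hold, each level must shrink the problem by a factor of $2-\epsilon$:

$$\frac{1+x}{2}\,n \;\le\; \frac{n}{2-\epsilon} \quad\Longrightarrow\quad x \;\le\; \frac{2}{2-\epsilon}-1 \;=\; \frac{\epsilon}{2-\epsilon} \;=\; \frac{2/9}{16/9} \;=\; \frac{1}{8},$$

i.e. the pivot should come from the middle $12.5\%$.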

by dibyendu at September 17, 2014 04:09 PM

QuantOverflow

Is this a reasonable approach to determine the relative importance of valuation factors?

I am trying to come up with a measure of the relative importance of a number of valuation factors. I am wondering whether correlation coefficients could be used to determine this.

More on the issue: my goal is to determine whether sectors are over- or undervalued when comparing current valuation metrics with historical ones. For example, let's assume I have P/B and P/E metrics for the consumer discretionary sector, and on a P/B basis the sector is undervalued but on a P/E basis it is overvalued. I am trying to build a way to sort this out and come up with an over/undervalued determination when the valuation metrics are contradictory.

My thought is to use metrics in time t-1 and returns in time t. Using this data I am thinking I can find correlations and use the correlation coefficients as the relative weights when trying to build a mechanism to determine whether a sector is over or undervalued.

Is this a reasonable approach? Any recommendations you have would be helpful. Thanks!

by Andrei at September 17, 2014 04:05 PM

Lobsters

High Scalability

The FireBox Warehouse Scale Computer in 2020 Will Have 1K Sockets, 100K Cores, 100PB NV RAM, and a 4Pb/s Network

That's the eye-popping prediction from Krste Asanović, University of California, Berkeley, in a presentation he gave at FAST '14 titled: FireBox: A Hardware Building Block for 2020 Warehouse-Scale Computers (pdf).

A FireBox system looks like this:

Trends in Warehouse Scale Computers (WSCs):

by Todd Hoff at September 17, 2014 03:56 PM

/r/netsec

Lobsters

StackOverflow

Why isn't Python very good for functional programming?

I have always thought that functional programming can be done in Python. Thus, I was surprised that Python didn't get much of a mention in this question, and when it was mentioned, it normally wasn't very positive. However, not many reasons were given for this (lack of pattern matching and algebraic data types were mentioned). So my question is: why isn't Python very good for functional programming? Are there more reasons than its lack of pattern matching and algebraic data types? Or are these concepts so important to functional programming that a language that doesn't support them can only be classed as a second rate functional programming language? (Keep in mind that my experience with functional programming is quite limited.)

by David Johnstone at September 17, 2014 03:35 PM

/r/netsec

StackOverflow

Scala's Slick with multiple PK insertOrUpdate() throws exception ERROR: syntax error at end of input

I am using Scala' Slick and PostgreSQL. And I am working well with tables with single PK. Now I need to use a table with multiple PKs:

case class Report(f1: DateTime,
    f2: String,
    f3: Double)

class Reports(tag: Tag) extends Table[Report](tag, "Reports") {
    def f1 = column[DateTime]("f1")
    def f2 = column[String]("f2")
    def f3 = column[Double]("f3")

    def * = (f1, f2, f3) <> (Report.tupled, Report.unapply)
    def pk = primaryKey("pk_report", (f1, f2))
}

val reports = TableQuery[Reports]

When I have an empty table and use reports.insert(report), it works well. But when I use reports.insertOrUpdate(report) I receive an exception:

Exception in thread "main" org.postgresql.util.PSQLException: ERROR: syntax error at end of input
  Position: 76
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
    at ....

What am I doing wrong? How to fix it?

Thanks in advance.


PS. I tried a workaround - I tried to implement "if exists then update else insert" logic with:

val len = reports.withFilter(_.f1 === report.f1).withFilter(_.f2 === report.f2).length.run.toInt
if (len == 1) {
  println("Update: " + report)
  reports.update(report)
} else {
  println("Insert: " + report)
  reports.insert(report)
}

But I still get exception on update:

Exception in thread "main" org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "pk_report"
  Detail: Key ("f1", f2)=(2014-01-31 04:00:00, addon_io.aha.connect) already exists.
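
For what it's worth, the duplicate-key error in the workaround likely comes from calling update on the whole TableQuery, which rewrites every row in the table; a sketch of a filtered variant (untested):

val byKey = reports.filter(r => r.f1 === report.f1 && r.f2 === report.f2)
if (byKey.length.run > 0) byKey.update(report)  // touches only the matching row
else reports.insert(report)

Note the check-then-act pair is not atomic; wrap it in a transaction if concurrent writers exist.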

by Konstantin Trunin at September 17, 2014 03:28 PM

QuantOverflow

How can I calculate Fama-French betas for a particular stock?

For a particular stock, what's the simplest way to calculate betas for the Fama-French factors SMB and HML?
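
The usual approach is a time-series OLS regression of the stock's excess returns on the three factors; the fitted slopes are the betas:

$$R_{i,t} - R_{f,t} = \alpha_i + \beta_{i,M}\,(R_{M,t} - R_{f,t}) + \beta_{i,s}\,\mathit{SMB}_t + \beta_{i,h}\,\mathit{HML}_t + \varepsilon_{i,t}$$

Here $R_{f,t}$ is the risk-free rate; the factor series are published in Kenneth French's data library, and a window of, say, 36-60 monthly returns is a common choice.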

by user939259 at September 17, 2014 03:13 PM

DataTau

/r/compsci

Advice for Online Courses

Hello World,

What are the best online courses you've participated in? The most fun? The most interesting?

And the worst?

submitted by merhoo
[link] [1 comment]

September 17, 2014 02:59 PM

/r/scala

TheoryOverflow

Graph Isomorphism and APSP matrix [on hold]

I am trying graph isomorphism using the APSP (All-Pairs Shortest Paths) matrix.

I would like to know:

  1. Is there any such algorithm?
  2. Why do you think this would not work? (Share your intuitive or rigorous argument.)
  3. Can you construct a counterexample for which such an algorithm would not work?
  4. How can I construct such a counterexample?

I have tried the graphs found at the link below:

http://funkybee.narod.ru/graphs.htm

No. 34 showed different APSP matrices; no. 33 were the same.

by jim198810 at September 17, 2014 02:54 PM

Problems which are solvable using Linear Programming

Can anyone share a link to a good survey or book about the different problem types for which we have linear-programming-based solutions, as well as the related techniques?

by user3246971 at September 17, 2014 02:49 PM

StackOverflow

Nodejs app can't connect to local MySQL on FreeBSD 8.2-STABLE

Hi guys,

I've installed the MySQL module for Node.js using npm install mysql and ran into a connection error.

Here's my js code.

var mysql      = require('mysql');
var connection = mysql.createConnection({
  host     : 'localhost',
  user     : 'user',
  password : 'userpassword',
  database : 'node'
});

connection.connect();

connection.query('SELECT * FROM table', function(err, rows, fields) {
  if (err) throw err;
  console.log('The solution is: ' + fields);
});

connection.end();

Here's the output I get:

Error: ER_ACCESS_DENIED_ERROR: Access denied for user 'node'@'146.66.*.*' (using password: YES)
at Handshake.Sequence._packetToError (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/sequences/Sequence.js:48:14)
at Handshake.ErrorPacket (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/sequences/Handshake.js:101:18)
at Protocol._parsePacket (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/Protocol.js:270:23)
at Parser.write (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/Parser.js:77:12)
at Protocol.write (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/Protocol.js:39:16)
at Socket.<anonymous> (/usr/local/www/apache22/data/node/node_modules/mysql/lib/Connection.js:82:28)
at Socket.EventEmitter.emit (events.js:95:17)
at Socket.<anonymous> (_stream_readable.js:746:14)
at Socket.EventEmitter.emit (events.js:92:17)
at emitReadable_ (_stream_readable.js:408:10)
--------------------
at Protocol._enqueue (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/Protocol.js:135:48)
at Protocol.handshake (/usr/local/www/apache22/data/node/node_modules/mysql/lib/protocol/Protocol.js:52:41)
at Connection.connect (/usr/local/www/apache22/data/node/node_modules/mysql/lib/Connection.js:108:18)
at Object.<anonymous> (/usr/local/www/apache22/data/node/local_db.js:9:12)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
at startup (node.js:119:16)

The thing is that instead of connecting to MySQL via localhost (as I specified in the connection options), it goes through the external IP address (for no obvious reason) and the firewall drops the connection in accordance with its rules.

I've searched for solutions in other topics, but either the error output or the suitable solutions differ a lot.

I've also tried popular solutions for related problems from other questions; this is the list of options that didn't work for me:

  1. Flushing the MySQL privileges table after creating the new user and granting the necessary privileges.
  2. Modifying /etc/hosts to get a proper answer from nslookup localhost.
  3. Manually pointing to the MySQL socket path - this also hasn't changed anything.
  4. Not using the host and server options at the same time.
  5. Specifying port 3306 explicitly.

After all that, I still get the same error.

Thanks in advance for your help.

by avermann at September 17, 2014 02:48 PM

Equivalent of Akka ByteString in Scala standard API

Is anyone aware of a standard API equivalent to Akka's ByteString: http://doc.akka.io/api/akka/2.3.5/index.html#akka.util.ByteString

This very convenient class has no dependency on any other Akka code, and it saddens me to have to import the whole Akka jar just to use it.

I found this fairly old discussion mentioning adding it to the standard API, but I don't know what happened to this project: https://groups.google.com/forum/#!msg/scalaz/ZFcjGpZswRc/0tCIdXvpGBAJ

Does anyone know of an equivalent piece of code in the standard API? Or in a very lightweight library?
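
There is nothing equivalent in the standard library (ByteBuffer in java.nio is mutable and has very different ergonomics), but one lightweight option is scodec-bits, which provides an immutable ByteVector with no further dependencies. A sketch, assuming the "org.scodec" %% "scodec-bits" artifact:

import scodec.bits.ByteVector

val bytes = ByteVector(0x48, 0x69)       // immutable, structural equality
val more  = bytes ++ ByteVector(0x21)    // cheap concatenation
val array = more.toArray                 // Array[Byte] when interop demands it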

by vptheron at September 17, 2014 02:47 PM

Error when running ansible-playbook

I've installed Ansible 1.2.3 on Ubuntu Precise 64.

Running ansible-playbook -i ansible_hosts playbook.yml give me this error:

ERROR: problem running ansible_hosts --list ([Errno 8] Exec format error)

Here's the content of ansible_hosts:

[development]
localhost   ansible_connection=local

and playbook.yml:

---
- hosts: development
  sudo: yes
  tasks:
    - name: install curl
      apt: pkg=curl update_cache=yes

How can I make this work?
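
The Exec format error is the giveaway: Ansible treats an inventory file with the execute bit set as a dynamic-inventory script and tries to run it (hence it attempted ansible_hosts --list). Assuming the bit is indeed set, clearing it makes Ansible parse the file as a static INI inventory:

chmod -x ansible_hosts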

by mll at September 17, 2014 02:44 PM

Using for loops within functions in R

Goal: take a data.frame with headers and return a new data.frame with additional variables created from calculations within a function.

My existing code works for creating a transform of a data.frame:

transform <- function(x) {
  transformtemp <- x
  transformtemp[1] <- x[1]
  for (i in 2:length(x)) {
    transformtemp[i] <- x[i] + 0.9 * transformtemp[i - 1]
  }
  transformscale <- sum(x) / sum(transformtemp)
  x <- transformtemp * transformscale
}

x[]<- lapply(x,transform)

With this code, I get back a data.frame with the function applied to all columns of my data.

I need help with:

  1. As of now, this code only uses 0.9 as my decay parameter. I want to create output using more decay parameters, say decay <- seq(0, 1, 0.1), and save them for use.
  2. I want the output to be the original data plus new columns of data with the function applied at the different decay rates, with names like "column1_0.9", "column1_0.8", "column2_0.9", etc.

I have tried using another loop with a changing decay rate but can't seem to get it right. I hope this all makes sense and let me know if I need to clarify further.

All the best and thanks!

by JMB at September 17, 2014 02:41 PM

eclipse gives error on start up after adding sbteclipse plugin

I added the sbteclipse plug-in. First I created a project named hello; then, in the project directory, I created a file named plugins.sbt and added this line to it:

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.5.0")

After the sbt> prompt I typed eclipse, and it created the Eclipse project files. But when I open Eclipse it gives me this error message:

An error has occurred. See the error log for more details The org.eclipse.jdt.ui.javaElementFilters plug-in extension "scala.tools.eclipse.javaelements.ScalaElementFilter" specifies a viewer filter class which does not exist. Plug-in org.scala-ide.sdt.core was unable to load class scala.tools.eclipse.javaelements.ScalaElementFilter. An error occurred while automatically activating bundle org.scala-ide.sdt.core (806).

Please help me resolve this error. When I imported the sbt project, Eclipse gave me another error:

Building workspace has encountered a problem

Errors occurred during the build. Error instantiating builder 'org.scala-ide.sdt.core.scalabuilder'. Plug-in org.scala-ide.sdt.core was unable to load class scala.tools.eclipse.ScalaBuilder. An error occurred while automatically activating bundle org.scala-ide.sdt.core (806). Plug-in org.scala-ide.sdt.core was unable to load class scala.tools.eclipse.ScalaBuilder. An error occurred while automatically activating bundle org.scala-ide.sdt.core (806).

Scala version: 2.11.1; sbt version: 0.13. I added the Scala IDE plugin from this source: http://scala-ide.org/download/current.html. I am using Eclipse Juno and pasted the following location into Install New Software:

http://download.scala-ide.org/sdk/helium/e38/scala211/stable/site

My project compiles successfully in sbt.

by user3801239 at September 17, 2014 02:34 PM

Serializing case class with trait mixin using json4s

I've got a case class Game which I have no trouble serializing/deserializing using json4s.

case class Game(name: String,publisher: String,website: String, gameType: GameType.Value)

In my app I use mapperdao as my ORM. Because Game uses a surrogate id, I do not have id as part of its constructor.

However, when mapperdao returns an entity from the DB it supplies the id of the persisted object using a trait.

Game with SurrogateIntId

The code for the trait is

trait SurrogateIntId extends DeclaredIds[Int]
{
    def id: Int
}

trait DeclaredIds[ID] extends Persisted

trait Persisted
{
    @transient
    private var mapperDaoVM: ValuesMap = null
    @transient
    private var mapperDaoDetails: PersistedDetails = null
private[mapperdao] def mapperDaoPersistedDetails = mapperDaoDetails

private[mapperdao] def mapperDaoValuesMap = mapperDaoVM

private[mapperdao] def mapperDaoInit(vm: ValuesMap, details: PersistedDetails) {
    mapperDaoVM = vm
    mapperDaoDetails = details
}
.....
}

When I try to serialize Game with SurrogateIntId I get empty parentheses returned; I assume this is because json4s doesn't know how to deal with the attached trait.

I need a way to serialize Game with only id added to its properties, and, almost as importantly, a way to do this for any T with SurrogateIntId, as I use these for all of my domain objects.

Can anyone help me out?
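
A sketch of one approach using json4s's CustomSerializer (field names taken from the case class; deserialization omitted, and untested against mapperdao): emit the declared fields plus id by hand, so the mapperdao internals mixed in by the trait are never touched:

import org.json4s._
import org.json4s.JsonDSL._

class GameSerializer extends CustomSerializer[Game with SurrogateIntId](format => (
  PartialFunction.empty,                   // reading back not needed here
  { case g: Game with SurrogateIntId =>
      ("id" -> g.id) ~
      ("name" -> g.name) ~
      ("publisher" -> g.publisher) ~
      ("website" -> g.website) ~
      ("gameType" -> g.gameType.toString)
  }
))

implicit val formats = DefaultFormats + new GameSerializer

Generalizing to any T with SurrogateIntId would mean abstracting the field extraction, e.g. via Extraction.decompose on the bare case class followed by merging in the id.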

by Matt Foxx Duncan at September 17, 2014 02:32 PM

clojure regex named groups

I have a problem with re-find in Clojure. I'm doing:

(re-find #"-(?<foo>\d+)-(?<bar>\d+)-(?<toto>\d+)-\w{1,4}$" 
"http://www.bar.com/f-a-c-a-a3-spok-ser-2-phse-2-1-6-ti-105-cv-9-31289-824-gu" )

My result is fine:

["-9-31289-824-gt" "9" "31289" "824"]

But I would prefer to have a hash looking like:

{:foo "9" :bar "31289" :toto "824"}

I understand that java.util.regex.Matcher/group does something like that, but I haven't been able to use it correctly. Thanks for your help.

by Elie Ladias at September 17, 2014 02:32 PM

/r/compilers

/r/scala

/r/netsec

/r/clojure

StackOverflow

Cannot run tests via sbt in Scala

I am trying out a simple HelloWorld example. Here is my directory structure:

hello
  build.sbt
  main
    scala
      Hello.scala
  test
    scala
      HelloTest.scala

Hello.scala contains a sayHello function that I am trying to call from a simple test in HelloTest.scala. Here is my build.sbt:

name := "Hello"

organization := "tycon"

scalaVersion := "2.11.2"

libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.1" % "test"

And here is an sbt run that does not run any tests:

$ sbt
[info] Set current project to scala (in build
file:~/git/scala/hello/main/scala/)
> compile
[info] Updating
{file:~/git/scala/hello/main/scala/}scala...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Compiling 1 Scala source to
~/git/scala/hello/main/scala/target/scala-2.10/classes...
[success] Total time: 3 s, completed Sep 17, 2014 9:04:00 AM
> test
[info] Passed: Total 0, Failed 0, Errors 0, Passed 0
[info] No tests to run for test:test
[success] Total time: 0 s, completed Sep 17, 2014 9:04:03 AM

I tried suggestions from other answers: replaced %% with % and substituted scalatest_2.10 for scalatest in libraryDependencies, and changed scalaVersion to 2.10.0. None of them worked. And yes, I was reloading each time build.sbt changed.

I believe that I am missing something very basic. I would appreciate any help. I am new to Scala.

Edit: For the sake of completeness, here are the two scala files:

Hello.scala:

trait Hello {
  def sayHello () = {
    println ("Hello, World!")
  }
}

HelloTest.scala:

import org.scalatest.FunSuite

class HelloTest extends FunSuite with Hello {
  test("say hello") {
    sayHello()
  }
}

Edit2: I changed the directory structure as suggested by ajozwik and Gabriele, but sbt still doesn't run the test:

~/git/scala/hello/src/main/scala$ sbt
[info] Set current project to scala (in build
file:~/git/scala/hello/src/main/scala/)
> test
[info] Passed: Total 0, Failed 0, Errors 0, Passed 0
[info] No tests to run for test:test
[success] Total time: 1 s, completed Sep 17, 2014 9:36:24 AM
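
The sbt banner gives the likely cause away: even after the restructure, the project is loaded from ~/git/scala/hello/src/main/scala/, i.e. sbt is started inside the source tree. build.sbt must sit at the project root with the src directory beside it, and sbt must be launched from that root (a sketch of the expected layout):

hello
  build.sbt
  src
    main
      scala
        Hello.scala
    test
      scala
        HelloTest.scala

Running sbt test from hello/ should then compile and discover HelloTest.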

by Gowtham Kaki at September 17, 2014 02:17 PM

/r/netsec

Daniel Lemire

The week-end freedom test

In an earlier post, I compared life in academia with life in industry. One recurring argument to favour an academic job over an industry job is freedom.

Of course, the feeling of freedom is fundamentally subjective. And we have a strong incentive to feel free, and to present ourselves as free. Freedom is strongly linked with social status. Telling people that you are free to do whatever you want in your job is signalling your high status.

So how can you tell how much freedom you have?

I have long proposed the retirement freedom test. If you were really free in your job, you would continue it into your retirement. Another test is the lottery ticket test: would you keep your job if you won the lottery? But these tests are, again, somewhat subjective. Most people only retire once and they usually cannot tell ahead of time what retirement will be like.

For something more immediate, more measurable, I propose the week-end test. I conjecture that, given a choice, most people with a family would want to be free on week-ends to spend all their time with their kids. (Admittedly, others might want to dedicate their week-ends to unbridled and continuous kinky sex. But you get my point.)

So anyone who works on week-end fails the week-end freedom test. If you are checking emails from work on week-ends, you fail.

So how do professors do? In my experience, many of them fail the week-end freedom test. Of course, most of the professors I know are in computer science… and a large fraction of them are active researchers. So my sample is not representative. Nevertheless, many professors who claim to love their freedom fail the week-end test miserably. I know because I got emails from them on week-ends.

Of course, there is no arguing with subjective experience. You can fail the week-end test and claim that it is by choice. But what does it mean objectively?

You pity the poor lawyer at a big law firm who has to prepare his files every Saturday instead of playing baseball with his son. But your case is different: you love your job and that is why you work 60 hours a week. Your decision is entirely yours and it has nothing to do with the professional pressure you are feeling. You genuinely enjoy preparing this new research grant on Sunday instead of teaching your kid to swim. Sure, all professors in your department work on week-ends, except this weirdo who will never get promoted (does he love his job?), but they all do it out of love. It is a love that is so powerful that it beats the alternatives (such as spending time with your kids, or with your sex partner).

Appendix: I pass the week-end test. Mostly. For the last few years, I have stopped checking emails on week-ends. But I fail the retirement and lottery tests.

by Daniel Lemire at September 17, 2014 02:10 PM

StackOverflow

Extension method to approximate "using expressions" [on hold]

C# provides the using statement, for writing code like this:

    ClassX x;
    using (var y = ClassY.Create()) x = GetXFromY(y);

I sometimes wish there were also using expressions, so I could instead just write:

    var x = using(var y = ClassY.Create()) GetXFromY(y);

What I can do is write the following extension method:

    public static R Using<D, R>(this D that, Func<D, R> funcUsingThat)
        where D : IDisposable
    {
        using (that)
        {
            return funcUsingThat(that);
        }
    }

And this allows me to write:

    var x = ClassY.Create().Using(GetXFromY);

It's clear that this is either a great idea or a terrible idea, but which one is it?

Edit: To make this less opinion based: Does this have any concrete problems? Is there any concrete evidence that this is a generally useful construct?

by Peter at September 17, 2014 02:10 PM

Simple string manipulation in Scala

Suppose I need a function stripBang(s: String): Option[String]:

  • if s is either null or empty return None
  • if s(0) == '!' return None
  • otherwise return Some(s.tail)

I am writing the function as follows:

 def stripBang(s: String) = Option(s).flatMap(PartialFunction.condOpt(_) { 
    case s if s.nonEmpty && s(0) == '!' => s.tail
 })

It seems to work but looks clumsy. How would you suggest improving it?
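
Note that the bullet spec and the posted code disagree: the code yields Some(s.tail) exactly when s starts with '!'. Assuming that behaviour is the intended one, a shorter formulation (a sketch):

def stripBang(s: String): Option[String] =
  Option(s).collect { case x if x.startsWith("!") => x.tail }

Option(null) covers the null case, and the guard fails for the empty string, so both map to None.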

by Michael at September 17, 2014 02:05 PM

QuantOverflow

How to combine Gaussian marginals with Gaussian copula to obtain multivariate normals?

In the book "Numerical Methods and Optimization in Finance" I read the following: "Combining the Gaussian copula with Gaussian marginals gives a fancy way of expressing multivariate normals. However, the Gaussian copula can also be combined with other marginals, and Gaussian marginals can be linked via any copula".

I would like to combine the Gaussian copula with Gaussian marginals to obtain multivariate normals for my 7 asset classes. In addition, I would like to combine t-marginals with a t-copula to obtain a multivariate t-distribution. Does anyone know how to do this in MATLAB? I have been struggling with this for quite some time!

This is how I approached the problem for the t-marginals & t-copula:

%% Define univariate process by t-distribution
for i = 1:nAssets
    marginal{i} = fitdist(returns(:,i), 'tlocationscale');
end

%% Copula calibration
for i = 1:nAssets
    U(:,i) = marginal{i}.cdf(returns(:,i)); % transform margin to uniform
end
[rhoT, DoF] = copulafit('t', U, 'Method', 'ApproximateML');

%% Reverse transformation on each index
U = copularnd('t', rhoT, DoF, NumObs * NumSim);
for j = 1:nAssets
    % icdf of a fitted distribution object takes only the probabilities
    ExpReturns(:,:,j) = reshape(marginal{j}.icdf(U(:,j)), NumObs, NumSim);
end


Does my approach make sense? Any help is very much appreciated, especially with the MATLAB code!

Best regards

by Peter Miller at September 17, 2014 02:02 PM

Fefe

StackOverflow

Getting shapeless records with specific keys to compile

I'm wondering why the following code fails to compile:

package app.models.world

import java.util.UUID

import shapeless._
import shapeless.ops.record.Selector
import shapeless.record._

case class Vect2(x: Int, y: Int)
case class Bounds(position: Vect2, size: Vect2)

object WObject {
  type Id = UUID
  def newId: Id = UUID.randomUUID()

  object Fields {
    object id extends FieldOf[Id]
    object position extends FieldOf[Vect2]

    case class all[L <: HList](implicit
      _id: Selector[L, id.type], _position: Selector[L, position.type]
    )
    object all {
      implicit def make[L <: HList](implicit
        _id: Selector[L, id.type],
        _position: Selector[L, position.type]
      ) = all[L]
    }
  }
}
import WObject._

/* World object */
abstract class WObject[L <: HList](val props: L)(implicit sel: Fields.all[L]) {
  def this(id: WObject.Id, position: Vect2) =
    this(Fields.id ->> id :: Fields.position ->> position :: HNil)

  def id = props(Fields.id)
  def position = props(Fields.position)

  lazy val bounds = Bounds(position, Vect2(1, 1))
}

I'm getting following errors:

[error] D:\work\scala\shapeworld\src\main\scala\app\models\WObject.scala:39: No field app.models.world.WObject.Fields.position.type in record L
[error]   def position = props(Fields.position)
[error]                       ^
[error] D:\work\scala\shapeworld\src\main\scala\app\models\WObject.scala:36: type mismatch;
[error]  found   : shapeless.::[shapeless.record.FieldType[app.models.world.WObject.Fields.id.type,app.models.world.WObject.Id],shapeless.::[shapeless.record.FieldType[app.models.world.WObject.Fields.position.type,app.models.world.Vect2],shapeless.HNil]]
[error]     (which expands to)  shapeless.::[java.util.UUID with shapeless.record.KeyTag[app.models.world.WObject.Fields.id.type,java.util.UUID],shapeless.::[app.models.world.Vect2 with shapeless.record.KeyTag[app.models.world.WObject.Fields.position.type,app.models.world.Vect2],shapeless.HNil]]
[error]  required: L
[error]     this(Fields.id ->> id :: Fields.position ->> position :: HNil)
[error]                           ^
[error] D:\work\scala\shapeworld\src\main\scala\app\models\WObject.scala:36: could not find implicit value for parameter sel: app.models.world.WObject.Fields.all[L]
[error]     this(Fields.id ->> id :: Fields.position ->> position :: HNil)
[error]     ^
[error] D:\work\scala\shapeworld\src\main\scala\app\models\WObject.scala:38: No field app.models.world.WObject.Fields.id.type in record L
[error]   def id = props(Fields.id)
[error]                 ^
[error] four errors found
[error] (compile:compile) Compilation failed
[error] Total time: 3 s, completed 2014-09-17 13.29.15

These errors seem odd to me.

1) Why does position insist that there is no field if the selector should imply that? I took my reasoning from Passing a Shapeless Extensible Record to a Function (continued)

2) Why does the secondary constructor fail?

The SBT project (56kb) can be found at https://www.dropbox.com/s/pjsn58dpqx1l4os/shapeworld.zip?dl=0

by arturaz at September 17, 2014 01:54 PM

how to avoid nesting in clojure

When I write a function in Clojure to check whether a user can delete a post, I get this:

(defn delete!
  {:arglists}
  [^String id]
  (if (valid-number? id)
    (let [result {:code 200 :status "error" :message "delete success"}]
      (if-let [user (session/get :userid)]
        (if-let [post (pdb/id id)]
          (if (= user (post :user_id))
            (do
              (pdb/delete! (Long/valueOf id))
              (assoc result :status "ok"))
            (assoc result :message (emsg :not-own)))
          (assoc result :message (emsg :post-id-error)))
        (assoc result :message (emsg :not-login))))))

So I want to fix it. I found these examples:

https://github.com/4clojure/4clojure/blob/develop/src/foreclojure/register.clj#L27

https://github.com/4clojure/4clojure/blob/develop/src/foreclojure/utils.clj#L32 - but that code is linear, not nested.

The delete! function is nested in an ugly way and is very hard to understand. How can I write a macro to avoid so much nesting, or is there another way to avoid it?

by ipaomian at September 17, 2014 01:51 PM

/r/emacs

StackOverflow

Gatling record JSessionID from URL

I'm the sysadmin for a small company and I'm trying to set up a first test with Gatling for one of our web apps. I know a bit of C and Java syntax as well as regexes, but no Scala whatsoever.

The app I'm trying to test has the jsessionid (including a jvmRoute) in the URL, not set in a cookie. According to what Stéphane Landelle wrote here, Gatling is supposed to automagically record the jsessionid per user session and replay it, but that appears to only work when the jsessionid is set as a cookie.

I deleted the recorded jsessionid from the URLs in the test case, reasoning that it would not be valid on any future attempts. When I run the test, the app server generates a new jsessionid, which is then not included in any future calls.

Because of this, I'm trying to scrape the jsessionid from the initial redirect and include it in all future URLs. There is a Location header in the first response that looks like this:

Location    https://URL/welcome.do;jsessionid=F97250BDC1576B5766CEFA56645EA3F4.node1

The code currently looks like this:

    .exec(http("Open Page")
      .get("""/?code=abcdef""")
      .headers(headers_0)
 // Test extract jsessionid from URL
       .check(headerRegex("Location", "jsessionid=(.*)")).saveAs("jsess")

    .exec(http("welcome.do")
      .post("""/welcome.do;jsessionid=${jsess}""")

...and it doesn't compile.

12:15:14.198 [ERROR] i.g.a.ZincCompiler$ - FirstTest.scala:53: value saveAs is not a member of io.gatling.http.request.builder.HttpRequestBuilder
12:15:14.199 [ERROR] i.g.a.ZincCompiler$ -          .check(headerRegex("Location", "jsessionid=(.*)")).saveAs("jsess")
12:15:14.200 [ERROR] i.g.a.ZincCompiler$ -                                                             ^
12:15:14.261 [ERROR] i.g.a.ZincCompiler$ - one error found

If I move one closing bracket to the end:

  .check(headerRegex("Location", "jsessionid=(.*)").saveAs("jsess"))

it compiles but does not do what's desired:

---- Errors --------------------------------------------------------------------
> No attribute named 'jsess' is defined                              11 (78.57%)
> status.in(200,304,201,202,203,204,205,206,207,208,209), but ac      2 (14.29%)
tually found 404
> headerRegex((Location,jsessionid=(.*))).exists, found nothing       1 ( 7.14%)
================================================================================

So, how do I record the jsessionid in order to reuse it? Or am I doing something completely wrong here? Any help is appreciated.
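
Gatling follows redirects by default, so by the time the checks run, the Location header of the initial 302 may already be gone, which would explain headerRegex finding nothing. A hedged sketch, assuming the disableFollowRedirect option available in Gatling 2 applies at the request level (keep saveAs inside the check, as in the version that compiles):

    .exec(http("Open Page")
      .get("""/?code=abcdef""")
      .headers(headers_0)
      .disableFollowRedirect
      .check(headerRegex("Location", """jsessionid=([^?/]+)""").saveAs("jsess")))

    .exec(http("welcome.do")
      .post("""/welcome.do;jsessionid=${jsess}"""))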

by Christoph Gösgens at September 17, 2014 01:36 PM

How to create a circular stream?

I'm trying to create a circular process using scalaz-stream by merging one source of data with a filtered version coming from the same data source. Here is a simple example of what I have so far :

val s1 = Process.emitAll(1 to 10).toSource

val w = wye.merge[Int]

val s2 = w.filter(_ < 5)

val w2 = s1.wye(s2)(w)

But it doesn't compile as s2 is a Process[Process.Env[Int,Int]#Y,Int] but needs to be a Process[Task,Int].

How can I specify that s2 is both the input (with s1) and the output of w?

by synapski at September 17, 2014 01:36 PM

TheoryOverflow

Is scheduling a set of tasks on single machine in P or in NP?

Given a set of tasks $T=\{t_1,\dots,t_n\}$ and execution times between the tasks $e(t_i,t_j)$, can we find a schedule $s$ for $T$ on a single machine with makespan $m_s < d$? Assume that the execution times, $d$, and $m_s$ are arbitrary non-negative integers. The execution time of a task $t_j$ depends on the task that was executed immediately before it, i.e. $t_i$. Thus execution times play a crucial role in determining the shortest schedule.

Do you know of any similar problem which is in P or NP?

by Umar at September 17, 2014 01:32 PM

CompsciOverflow

Cutting equal sticks from different sticks

You have $n$ sticks of arbitrary lengths, not necessarily integral.

By cutting some sticks, you want to get $k<n$ sticks such that:

  • All $k$ sticks have the same length;
  • All $k$ sticks are at least as long as the other sticks.

What algorithm would you use such that the number of cuts is minimal? And what is that number?

As an example, take $k=2$ and any $n\geq 2$. The following algorithm can be used:

  • Order the sticks by descending order of length such that $L_1\geq L_2 \geq \ldots \geq L_n$.
  • If $L_1\geq 2 L_2$ then cut stick #1 to two equal pieces. There are now two sticks of length $L_1 / 2$, which are at least as long as the remaining sticks $2 \ldots n$.
  • Otherwise ($L_1 < 2 L_2$), cut stick #1 to two unequal pieces of sizes $L_2$ and $L_1-L_2$. There are now two sticks of length $L_2$, which is longer than $L_1-L_2$ and the other sticks $3 \ldots n$.

In both cases, a single cut is sufficient.

I tried to generalize this to larger $k$, but there seem to be a lot of cases to consider. Can you find an elegant solution?

by Erel Segal Halevi at September 17, 2014 01:30 PM

Lobsters

StackOverflow

How to not apply a function when all parameters are given

Given the following scenario:

def add(a: Int, b: Int): Int = a + b
def f1(adder: () => Int) = adder()

f1(add(1,2) _) // Does **NOT** compile, because add seems to be already executed
f1(() => add(1,2)) // This works, but seems to be ugly

Is there any way to make it work with the underscore?
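
If the goal is simply to defer evaluation, a by-name parameter avoids both the underscore and the explicit function literal (a sketch):

def f1(adder: => Int) = adder   // by-name: not evaluated at the call site

f1(add(1, 2))                   // compiles; add(1, 2) runs when adder is used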

by regexp at September 17, 2014 01:20 PM

QuantOverflow

Capital Allocation for Portfolio of Multi-Strategy and Multi-Instrument

I would like to know if there is a way (or theory) to manage a multi-strategy, multi-instrument portfolio that would calculate the optimal weight for allocating capital to each combination of strategy and instrument (sometimes one strategy works for many instruments, or vice versa).

My first idea is that we can treat each combination of strategy and instrument as an imaginary instrument and apply Markowitz's portfolio theory to find the optimal weights.

However, I have also learned that the estimated returns and covariances are very noisy in practice and yield very different results from CAPM, so this may not be an ideal way. Another problem is that my strategies vary across types and timeframes (from intra-day to month-long holding times). Estimating average return on a daily basis could be misleading and underestimate the return of strategies that act infrequently (for instance, strategies that take advantage of annual earnings announcements or monthly FOMC meetings).

I checked the Kelly formula and found that its answer is exactly the same as Markowitz's theory. Thus, most issues with mean-variance theory (e.g. the noise in estimating means and variances) apply here.

I wonder if anyone can share some thoughts on this issue. Any idea/example?

Many thanks.

by user3284048 at September 17, 2014 01:10 PM

StackOverflow

Explain Kinesis Shard Iterator - AWS Java SDK

OK, I'll start with an elaborated use-case and will explain my question:

  1. I use a 3rd party web analytics platform which utilizes AWS Scala Kinesis streams in order to pass data from the client into the final destination - a Kinesis stream;
  2. The web analytics platform uses 2 streams:
    1. A data collector stream (single shard stream);
    2. A second stream to enrich the raw data from the collector stream (single shard stream); Most importantly, this stream consumes the raw data from the first stream using TRIM_HORIZON iterator type;
  3. I consume the data from the stream using the AWS Java SDK, specifically using the GetShardIteratorRequest class;
  4. I'm currently developing the extraction class, so this is done synchronously, meaning I consume data only when I run my class;
  5. The class surprisingly works, although there are some things that I fail to understand, specifically with respect to how the data is consumed from the stream and the meaning of each one of iterator types;

My problem is that the data I retrieve is inconsistent and has no chronological logic in it.

  • When I use AT_SEQUENCE_NUMBER and provide the first sequence number from the shard with

    .getSequenceNumberRange().getStartingSequenceNumber();

    ... as the starting sequence number, I'm not getting all records. Similarly with AFTER_SEQUENCE_NUMBER;

  • When I use LATEST, I'm getting zero results;
  • When I use TRIM_HORIZON, which should be the sensible choice, it doesn't seem to work reliably. It used to provide the data, but then I added new "events" (records in the final stream) and received zero records. A mystery.

My questions are:

  1. How can I safely consume data from the stream, without having to worry about missed records?
  2. Is there an alternative to the ShardIteratorRequest?
  3. If there is, how can I just "browse" the stream and see what's inside it for debugging references?
  4. What am I missing with the TRIM_HORIZON method?

Thanks in advance, I'd really love to learn a bit more about data consumption from a Kinesis stream.
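
On the TRIM_HORIZON point: a single GetRecords call frequently returns zero records even when the stream holds data; the contract is to keep calling with the returned nextShardIterator. A sketch against the AWS Java SDK (the stream and shard names are placeholders):

import com.amazonaws.services.kinesis.AmazonKinesisClient
import com.amazonaws.services.kinesis.model.{GetRecordsRequest, GetShardIteratorRequest}
import scala.collection.JavaConverters._
import scala.annotation.tailrec

val client = new AmazonKinesisClient()
val itReq = new GetShardIteratorRequest()
  .withStreamName("my-stream")
  .withShardId("shardId-000000000000")
  .withShardIteratorType("TRIM_HORIZON")

@tailrec
def poll(iterator: String): Unit = {
  val result = client.getRecords(new GetRecordsRequest().withShardIterator(iterator))
  result.getRecords.asScala.foreach(r => println(new String(r.getData.array())))
  Thread.sleep(1000)                        // stay under the per-shard limits
  Option(result.getNextShardIterator) match {
    case Some(next) => poll(next)           // keep following the iterator chain
    case None       => ()                   // shard has been closed
  }
}

poll(client.getShardIterator(itReq).getShardIterator)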

by YuvalHerziger at September 17, 2014 01:04 PM

Dave Winer

Chrome is broken

Yesterday I finally got so fed up with breakage in the clipboard and debugger in Chrome that I went public with that frustration. I guess it was possible that it was "just me," but it was confirmed by other users. These crucial features, one for programmers, and the other for everyone, are broken. Sometimes you can clear the problem by restarting the browser. Other times, even that won't do.

There apparently is a workaround for the debugger problem.

Remember when Chrome launched? We were told it was an inherently more reliable design because each tab was in its own thread, so you could have one thread go down and it wouldn't take the browser with it (an infuriating feature of Safari on iOS, btw, it's the crashiest browser I've ever used).

As with many products, they devoted the resources to make it work when they wanted to take the market. Their best programmers, with lots of focus -- in this case, Firefox, I guess. They win, and then we, the users, deal with the same old breakage, and no one home to fix it. (Firefox was no better, their disregard for stability was the reason I split.)

This is a lot like what Comcast et al do with connectivity. They have a monopoly, so why should they care. Or how Microsoft blew it with Internet Explorer.

Computer users tend to think crashes are their fault, they're doing something wrong, so they live with broken tools. It would be great if the people at Google had enough pride to keep their browser functioning anyway. I can't imagine they accept that features like the clipboard and debugger are broken. Are they broken in the versions they use?

Also it has been suggested that I switch to Canary, the "bleeding-edge" (Google's term) version of Chrome. That seems like very bad advice. If the "stable" version is this badly broken, why would you expect users of a browser named after a dead bird, one that died in an experiment, to fare any better.

One more thing: The horde of reporters is around for the launch, with universal praise for the new king of the hill. They're almost never around to report the messes that are left behind after a product achieves market dominance.

September 17, 2014 01:02 PM

DataTau

/r/clojure

/r/netsec

CompsciOverflow

Decidability of empty intersection of two languages accepted by Turing machines

I am really struggling with determining the decidability of languages and cant figure out whether this problem is decidable or not.

I have a language

$\qquad\displaystyle L = \{ (R(M_1), R(M_2)) \mid L(M_1) \cap L(M_2) = \emptyset \}$,

where $R(M_1)$ and $R(M_2)$ are representations of Turing machines $M_1$, resp $M_2$ and $L(M_1)$, $L(M_2)$ are the languages accepted by these machines.

Is language $L$ a decidable language?

I have found this theorem: It is undecidable whether or not the languages generated by two given context-free grammars have an empty intersection. (but I dont know whether $L(M_1)$ and $L(M_2)$ are context-free, I only know that they are accepted by some machines, so I dont know if I can use this theorem).

I think that this problem is undecidable and my attempt to prove this would go like this:

In order for this language to be decidable. I would have to build a Turing machine that tests whether an arbitrary word is accepted by $M_1$ and not $M_2$ (and vice versa) but I cannot guarantee that it will halt for all inputs (since language acceptence does not guarantee that the language is decidable) so it proves the undecidability.

Is this correct approach?

Is $L$ at least recursively enumarable?

by Smajl at September 17, 2014 12:41 PM

Lobsters

Aspects for Testing Connectivity of Mobile Game

Some considerations where to focus when testing the connectivity aspects of mobile game

Comments

by vvh at September 17, 2014 12:36 PM

StackOverflow

scala project, it gives me red error lines under def

In my Scala project, the IDE gives me red error lines under def.

package ass1

object Main extends App {

}


def pascal(c: Int, r: Int): Int = {
  if (c == 0 || c == r) 1
  else pascal(c - 1, r - 1) + pascal(c, r - 1)
}   

and the error is: "multiple markers at this line: expected class or object definition".

I don't understand why. I need help. Thanks.
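
In Scala 2, a def cannot appear at the top level of a file; every definition must live inside an object, class, or trait, which is exactly what "expected class or object definition" is saying. Moving pascal inside Main (a sketch):

object Main extends App {
  def pascal(c: Int, r: Int): Int =
    if (c == 0 || c == r) 1
    else pascal(c - 1, r - 1) + pascal(c, r - 1)

  println(pascal(2, 4))   // 6
}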

by No Name at September 17, 2014 12:36 PM

Scala: type inference of generic and it's type argument

Let's assume I have an instance of an arbitrary one-argument generic class (I'll use List in the demonstration, but this can be any other generic).

I'd like to write a generic function that takes an instance (c) and can determine which generic class (A) and which type argument (B) produced the class (C) of that instance.

I've come up with something like this (body of the function is not really relevant but demonstrates that C conforms to A[B]):

def foo[C <: A[B], A[_], B](c: C) {
  val x: A[B] = c
}

... and it compiles if you invoke it like this:

foo[List[Int], List, Int](List.empty[Int])

... but compilation fails with error if I omit explicit type arguments and rely on inference:

foo(List.empty[Int])

The error I get is:

    Error:Error:line (125)inferred kinds of the type arguments (List[Int],List[Int],Nothing) do not conform to the expected kinds of the type parameters (type C,type A,type B).
List[Int]'s type parameters do not match type A's expected parameters:
class List has one type parameter, but type A has one
  foo(List.empty[Int])
  ^
    Error:Error:line (125)type mismatch;
 found   : List[Int]
 required: C
  foo(List.empty[Int])
                ^

As you can see, Scala's type inference cannot infer the types correctly in this case (it seems its guess is List[Int] instead of List for the 2nd argument and Nothing instead of Int for the 3rd).

I assume that type bounds for foo I've come up with are not precise/correct enough, so my question is how could I implement it, so Scala could infer arguments?

Note: if it helps, the assumption that all potential generics (As) inherit/conform some common ancestor can be made. For example, that A can be any collection inherited from Seq.

Note: the example described in this question is synthetic and is a distilled part of the bigger problem I am trying to solve.
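
A sketch of one workaround: dropping C lets the compiler unify A and B directly against the argument's shape, instead of first fixing C = List[Int] and then failing to split it into a type constructor and its argument:

def foo[A[_], B](c: A[B]): Unit = {
  val x: A[B] = c   // c already has type A[B]; no extra bound needed
}

foo(List.empty[Int])  // infers A = List, B = Int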

by Eugeny Loy at September 17, 2014 12:29 PM

Scala implicit definition: how to locate it?

I designed a class which takes an implicit engineProvider: ClientSSLEngineProvider as a constructor parameter. When I instantiate it, I don't have any implicit definition of that type anywhere within my source file. But the code still compiles without any errors, and when I use the debugger, I can see that this parameter is initialized with some value. It looks like this implicit is defined somewhere else (in one of the imports). The question is: how can I locate the exact place where it is defined? I'm using IDEA for development, if that matters.

by Uniqus at September 17, 2014 12:04 PM

Planet Clojure

4 Features Javascript can steal from Clojure(Script)

ClojureScript adds a lot of value on top of the Javascript platform. But some of that value can be had directly in Javascript without using a new language.


by LispCast at September 17, 2014 11:52 AM

StackOverflow

Always `Could not bind to tcp://my_ip_here:8080 Address already in use`

I was trying to deploy my websocket server and start running it, but it always gives:

PHP Fatal error:
Uncaught exception 'React\Socket\ConnectionException'
with message       'Could not bind to tcp://my_ip_here:8080:
                    Address already in use'
in                 /var/www/html/webscoket/vendor/react/socket/src/Server.php:29

here's my server.php:

<?php
require dirname(__DIR__) . '/vendor/autoload.php';

use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use React\Socket\Server;
use React\ZMQ\Context;

$loop   = React\EventLoop\Factory::create();
$app    = new onyxsocket();
$webSock = new Server($loop);
$webSock->listen(8080, 'my_ip_here');
$webServer = new IoServer(
    new HttpServer(
        new WsServer(
            $app
        )
    ),
    $webSock
);

$context = new Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://my_ip_here:5555');
$pull->on('error', function ($e) {
    var_dump($e->getMessage());
});
$pull->on('message', array($app, 'onbroadcast'));
$loop->run();

What I've tried so far is to check the available ports on the production server: netstat -anp suggested that port 8080 is free. But it still shows the error Address already in use. I also tried other ports given by the administrator, but no luck.

The server.php that I'm trying to deploy works fine on localhost, but I don't know what I need to do to make it work on the production server.

Need help. Thanks.

by Vainglory07 at September 17, 2014 11:36 AM

CompsciOverflow

Algorithm for finding best combination of elements

Say I have a very large, arbitrary number of variables, each of which I can assign to be type A, B, or C.

The types come with expenses: Type A's are the least expensive, and C's are the most expensive, but their expense varies from variable to variable.
For example {A, C, C} may actually be more expensive than {C, A, A} if that first variable happens to be worth more, but we won't know how much the total cost is until we assign the types and run the program.

Also we don't want to go below a certain given total expense (can't make all A's), but get as close as possible to it.

I am trying to minimize the total expense while staying above the threshold. The search space (permutations of variable types) is too large to try all combinations.

Someone recommended sampling via Monte Carlo method earlier.
How can the Monte Carlo method, or Markov-Chain Monte Carlo, be applied to such a problem to find the optimal combination of parameters?
Could a genetic algorithm be used effectively?
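
A hedged sketch of how simulated annealing (an MCMC relative) could be applied; cost stands in for the black-box evaluation of an assignment and threshold for the minimum total expense, both assumptions about the setup:

import scala.util.Random

def anneal(n: Int, cost: Vector[Char] => Double, threshold: Double,
           steps: Int = 100000): Vector[Char] = {
  val types = Vector('A', 'B', 'C')
  val rnd = new Random()

  // Penalize infeasible assignments (total expense below the threshold)
  // so the search is pushed back toward the feasible region.
  def energy(s: Vector[Char]): Double = {
    val c = cost(s)
    if (c < threshold) c + 1e6 * (threshold - c) else c
  }

  var cur = Vector.fill(n)(types(rnd.nextInt(types.size)))
  var curE = energy(cur)
  var best = cur
  var bestE = curE

  for (step <- 0 until steps) {
    val temp = 1.0 - step.toDouble / steps              // linear cooling
    val cand = cur.updated(rnd.nextInt(n), types(rnd.nextInt(types.size)))
    val candE = energy(cand)
    // Metropolis rule: always accept improvements, sometimes accept
    // worsenings, with probability shrinking as the temperature drops.
    if (candE < curE || rnd.nextDouble() < math.exp((curE - candE) / (temp + 1e-9))) {
      cur = cand; curE = candE
      if (curE < bestE) { best = cur; bestE = curE }
    }
  }
  best
}

A genetic algorithm would work similarly - mutate and recombine assignment vectors and score them with the same penalized cost - so the penalty trick for the threshold carries over unchanged.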

by ginsunuva at September 17, 2014 11:33 AM

/r/emacs

Fefe

The Lex Edathy is out. In the future it will also be punishable ...

The Lex Edathy is out.
In the future, anyone who merely "produces" or "distributes" photos of a naked child "without authorization" is to be criminally liable - without the child having to pose in any way.
And here is what they are trying to do about "cyberbullying":
In the future it is to be a criminal offence to produce or distribute, without authorization, photos that inflict "considerable damage to the person's reputation".
In other words: photos of police officers beating people up are banned from now on. Even just taking them.

September 17, 2014 11:02 AM

/r/compsci

StackOverflow

Scala function that takes a multiple parameter anonymous function, as a parameter

I have a function named sum that takes a single-parameter anonymous function as a parameter, plus two Ints.

def sum (f: Int => Int , a: Int, b: Int): Int =
{
  if(a > b) 0 else f(a) + sum(f, a + 1, b)
}
  sum((x: Int) => x , 2, 10)

How could I modify the function definition so that it can take a multiple-parameter function, so I could call it like this:

sum((y: Int, i: Int) => y + i, 2, 10)

I know the function I have supplied would be pretty useless when passed a multiple-parameter function... but I am just looking for how it can be done.

Thanks
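
A sketch of the mechanical change: widen the parameter's type to a two-argument function and supply both arguments inside sum (what to pass as the second argument depends on the caller's intent; here the current index is reused, an arbitrary choice for illustration):

def sum2(f: (Int, Int) => Int, a: Int, b: Int): Int =
  if (a > b) 0 else f(a, a) + sum2(f, a + 1, b)

sum2((y, i) => y + i, 2, 10)   // f now takes two parameters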

by Geem7n at September 17, 2014 10:46 AM

zeromq multithreading, How threads receiving messages from REP socket?

Multithreading: the threads in my worker routine receive messages through a REP socket. Isn't a REP socket supposed to send messages in ZeroMQ? I am new to ZeroMQ.

by monsterrrrr at September 17, 2014 10:42 AM

Why am I getting memory quota exceeded errors on Heroku even while unused? (play2/scala)

Just sitting with no requests, memory use keeps increasing, eventually exceeding the memory quota. There are multiple things I don't understand.

  1. Why memory use keeps increasing when no requests?
  2. Where does the value of "Process running mem" come from (does not seem to be a sum of any combination of numbers from the Heap and Non-Heap usage that I can tell).
  3. Why does it exceed even though I have the recommended JAVA_OPTS: -Xmx384m -Xss512k -XX:+UseCompressedOops -javaagent:heroku-javaagent-1.2.jar=stdout=true?

Here is a sample of the log file

app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 275M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 276M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 277M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 277M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 278M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 212M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 213M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 213M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
app/web.1:  heroku-javaagent: JVM Memory Usage     (Heap): used: 214M committed: 349M max:349M 
app/web.1:  heroku-javaagent: JVM Memory Usage (Non-Heap): used: 37M committed: 37M max:219M 
app/web.1:  heroku-javaagent: JVM Threads                : total: 37 daemon: 7 non-daemon: 21 internal: 9 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 
heroku/web.1:  Process running mem=517M(101.1%) 
heroku/web.1:  Error R14 (Memory quota exceeded) 

by Dax Fohl at September 17, 2014 10:41 AM

Would it be possible to add type inference to the C language?

Let's say we create a reimplementation of C, with the only difference being that types are inferred. Storage classes and modifiers would still need to be given (const, static, restrict, etc.), and let's restrict our attention to single-file C programs for the moment. Could it be done? What are the major impediments?

Some thoughts on what might cause problems with type inference

  • structs with the same field name would need to be disambiguated manually
  • same for unions with the same field names
  • casts would probably need a "from" annotation, something like

    var i = (uint32_t -> uint64_t) *some_pointer;
    

These problems would require a bit of user annotation, but shouldn't be too burdensome. Is there some killer issue that blows this idea out of the water?

Edit: To clarify, I'm not talking about adding generics or parametric polymorphism, just type inference for existing C types.

Edit 2014: Anyone interested in this concept may want to look into Rust

by deontologician at September 17, 2014 10:21 AM

Planet Theory

Call for nominations: Presburger Award 2015

Here is the call for nominations for one of the EATCS awards that is closest to my heart. Do put pen to paper and nominate your favourite young TCS researcher! He/She might join a truly impressive list of previous award recipients.
 
Presburger Award for Young Scientists 2015

   Call for Nominations

   Deadline: December 31st, 2014

Starting in 2010, the European Association for Theoretical Computer Science (EATCS) established the Presburger Award. The Award is conferred annually at the International Colloquium on Automata, Languages and Programming (ICALP) to a young scientist (in exceptional cases to several young scientists) for outstanding contributions in theoretical computer science, documented by a published paper or a series of published papers. The Award is named after Mojzesz Presburger who accomplished his path-breaking work on decidability of the theory of addition (which today is called Presburger arithmetic) as a student in 1929.

Nominations for the Presburger Award can be submitted by any member or group of members of the theoretical computer science community except the nominee and his/her advisors for the master thesis and the doctoral dissertation. Nominated scientists have to be at most 35 years at the time of the deadline of nomination (i.e., for the Presburger Award of 2015 the date of birth should be in 1979 or later). The Presburger Award Committee of 2015 consists of Zoltan Esik (Szeged), Claire Mathieu (Paris), and Peter Widmayer (Zürich, chair).

Nominations, consisting of a two page justification and (links to) the respective papers, as well as additional supporting letters, should be sent by e-mail to:

   Peter Widmayer
   widmayer@inf.ethz.ch

The subject line of every nomination should start with Presburger Award 2015, and the message must be received before December 31st, 2014.

The award includes an amount of 1000 Euro and an invitation to ICALP 2015 for a lecture.

Previous Winners:
  • Mikołaj Bojanczyk, 2010
  • Patricia Bouyer-Decitre, 2011
  • Venkatesan Guruswami and Mihai Patrascu, 2012
  • Erik Demaine, 2013
  • David Woodruff, 2014
Official website: http://www.eatcs.org/index.php/presburger

by Luca Aceto (noreply@blogger.com) at September 17, 2014 10:05 AM

StackOverflow

ZeroMQ producing meager results

I am testing out ZeroMQ and I am only getting around 1227 - 1276 messages per second. I have read, however, that it should be capable of over 100x this amount.

What am I doing wrong? Is there some configuration I can specify to fix this?

I am using the following functionality:

import java.io.FileNotFoundException;
import java.io.UnsupportedEncodingException;

import org.zeromq.ZContext;
import org.zeromq.ZFrame;
import org.zeromq.ZMsg;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.PollItem;
import org.zeromq.ZMQ.Poller;
import org.zeromq.ZMQ.Socket;
// InvalidProtocolBufferException comes from protobuf:
import com.google.protobuf.InvalidProtocolBufferException;

public static final String SERVER_LOCATION = "127.0.0.1";
public static final int SERVER_BIND_PORT = 5570;

public static void receiveMessages() throws InvalidProtocolBufferException, FileNotFoundException, UnsupportedEncodingException{
    ZContext ctx = new ZContext();

    Socket frontend = ctx.createSocket(ZMQ.PULL);
    frontend.bind("tcp://*:"+SERVER_BIND_PORT);

    int i = 1;
    do{
        ZMsg msg = ZMsg.recvMsg(frontend);
        ZFrame content = msg.pop();
        if(content!= null){
            msg.destroy();
            System.out.println("Received: "+i);
            i++;
            content.destroy();
        }
    }while(true);
}

public static void sendMessages() throws FileNotFoundException, UnsupportedEncodingException{
    ZContext ctx = new ZContext();
    Socket client = ctx.createSocket(ZMQ.PUSH);

    client.setIdentity("i".getBytes());
    client.connect("tcp://"+SERVER_LOCATION+":"+SERVER_BIND_PORT);

    PollItem[] items = new PollItem[] { new PollItem(client, Poller.POLLIN) };  // note: never used
    int i = 1;
    Timer t = new Timer(timeToSpendSending);
    t.start();
    do{
        client.send(/* object to send*/ , 0);
        i++;
    }while(!t.isDone());

    System.out.println("Done with "+i);
}

Timer class used to limit time the program runs for:

class Timer extends Thread{
    int time;
    boolean done;
    public Timer(int time){
        this.time = time;
        done = false;
    }
    public void run(){
        try {
            Thread.sleep(time);
            done = true;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    public boolean isDone(){
        return done;
    }
}

Edit: I am using jeroMQ

<dependency>
    <groupId>org.zeromq</groupId>
    <artifactId>jeromq</artifactId>
    <version>0.3.4</version>
</dependency>

by mangusbrother at September 17, 2014 10:02 AM

Fefe

Carinthian traffic policeman steps into a manure booby trap. The ...

Carinthian traffic policeman steps into a manure booby trap.
The trap, which consisted of an explosive charge and a bucket filled with liquid manure, was placed at a spot where radar speed checks are regularly carried out.

September 17, 2014 10:02 AM

Brief announcement from the business section of the FAZ (no, ...

Brief announcement from the business section of the FAZ (no, really!):
In the fifties, capital was still tamed. That brought prosperity and social progress.
Could that have something to do with this? Suddenly not so dogmatic after all?

September 17, 2014 10:02 AM

QuantOverflow

Why does the price of a convertible bond go up if the CDS spread goes up?

Looking at convertible bond prices in a commercial pricing tool, which is based on a model of Black-Scholes volatility plus a Poisson process of jump to default, I noticed that increasing the spread for CDS on the issuer causes the fair price of the convertible bond to go up.

Since increased credit risk suggests that there is more chance that the bond will be defaulted on, why should it make the bond more valuable?

by jwg at September 17, 2014 09:59 AM

StackOverflow

Why does sbt report "value enablePlugins is not a member of sbt.Project" in Play project?

I've just been getting the following error when trying to compile any Play applications:

error: value enablePlugins is not a member of sbt.Project
lazy val root = (project in file(".")).enablePlugins(PlayScala)
                                       ^
sbt.compiler.EvalException: Type error in expression
    at sbt.compiler.Eval.checkError(Eval.scala:343)
    at sbt.compiler.Eval.compileAndLoad(Eval.scala:165)
    at sbt.compiler.Eval.evalCommon(Eval.scala:135)
    at sbt.compiler.Eval.evalDefinitions(Eval.scala:109)
     ...
     ...
   sbt.compiler.EvalException: Type error in expression
Use 'last' for the full log.
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
Failed to reload source file list: sbt process never got in touch, 
so unable to handle request WatchTransitiveSourcesRequest(true)

I've seen some talk of this error elsewhere, but unlike in those examples I don't have any extra plugins or project dependencies; I get this error when compiling an untouched play-scala template right after selecting it with activator new.

Here are those plugins included in the template in project/plugins.sbt:

resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"

// The Play plugin
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.3")

// web plugins
addSbtPlugin("com.typesafe.sbt" % "sbt-coffeescript" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-jshint" % "1.0.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-rjs" % "1.0.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-digest" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-mocha" % "1.0.0")

The last time I built a Play application was about a month ago and I had no problem; in the meantime I've been compiling vanilla Scala-only apps (often with Activator) without any trouble. Could this be Play 2.3 related?

I have the line sbt.version=0.13.5 in project/build.properties and I've made sure my sbt version is the latest.

My code is exactly that of the play-scala template but in case it makes things easier, here's the contents of build.sbt:

name := """my-first-app"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.11.1"

libraryDependencies ++= Seq(
  jdbc,
  anorm,
  cache,
  ws
)

Thanks in advance for any help.

EDIT:

Doing sbt about from the app root directory I get this error which I'll include in full:

$ sbt about
[info] Loading global plugins from /home/.sbt/0.13/plugins
[info] Loading project definition from /home/my-first-app/project
/home/my-first-app/build.sbt:5: error: value enablePlugins is not a member of sbt.Project
lazy val root = (project in file(".")).enablePlugins(PlayScala)
                                       ^
sbt.compiler.EvalException: Type error in expression
    at sbt.compiler.Eval.checkError(Eval.scala:343)
    at sbt.compiler.Eval.compileAndLoad(Eval.scala:165)
    at sbt.compiler.Eval.evalCommon(Eval.scala:135)
    at sbt.compiler.Eval.evalDefinitions(Eval.scala:109)
    at sbt.EvaluateConfigurations$.evaluateDefinitions(EvaluateConfigurations.scala:197)
    at sbt.EvaluateConfigurations$.evaluateSbtFile(EvaluateConfigurations.scala:99)
    at sbt.Load$.sbt$Load$$loadSettingsFile$1(Load.scala:507)
    at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:502)
    at sbt.Load$$anonfun$sbt$Load$$memoLoadSettingsFile$1$1.apply(Load.scala:501)
    at scala.Option.getOrElse(Option.scala:120)
    at sbt.Load$.sbt$Load$$memoLoadSettingsFile$1(Load.scala:501)
    at sbt.Load$$anonfun$loadSettings$1$2.apply(Load.scala:500)
    at sbt.Load$$anonfun$loadSettings$1$2.apply(Load.scala:500)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.AbstractTraversable.map(Traversable.scala:105)
    at sbt.Load$.loadSettings$1(Load.scala:500)
    at sbt.Load$.sbt$Load$$expand$1(Load.scala:523)
    at sbt.Load$.loadSettings(Load.scala:528)
    at sbt.Load$.sbt$Load$$loadSbtFiles$1(Load.scala:464)
    at sbt.Load$.defaultLoad$1(Load.scala:475)
    at sbt.Load$.loadTransitive(Load.scala:478)
    at sbt.Load$.loadProjects$1(Load.scala:418)
    at sbt.Load$.loadUnit(Load.scala:419)
    at sbt.Load$$anonfun$15$$anonfun$apply$11.apply(Load.scala:256)
    at sbt.Load$$anonfun$15$$anonfun$apply$11.apply(Load.scala:256)
    at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:93)
    at sbt.BuildLoader$$anonfun$componentLoader$1$$anonfun$apply$4$$anonfun$apply$5$$anonfun$apply$6.apply(BuildLoader.scala:92)
    at sbt.BuildLoader.apply(BuildLoader.scala:143)
    at sbt.Load$.loadAll(Load.scala:312)
    at sbt.Load$.loadURI(Load.scala:264)
    at sbt.Load$.load(Load.scala:260)
    at sbt.Load$.load(Load.scala:251)
    at sbt.Load$.apply(Load.scala:134)
    at sbt.Load$.defaultLoad(Load.scala:37)
    at sbt.BuiltinCommands$.doLoadProject(Main.scala:473)
    at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:467)
    at sbt.BuiltinCommands$$anonfun$loadProjectImpl$2.apply(Main.scala:467)
    at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:60)
    at sbt.Command$$anonfun$applyEffect$1$$anonfun$apply$2.apply(Command.scala:60)
    at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
    at sbt.Command$$anonfun$applyEffect$2$$anonfun$apply$3.apply(Command.scala:62)
    at sbt.Command$.process(Command.scala:95)
    at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:100)
    at sbt.MainLoop$$anonfun$1$$anonfun$apply$1.apply(MainLoop.scala:100)
    at sbt.State$$anon$1.process(State.scala:179)
    at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:100)
    at sbt.MainLoop$$anonfun$1.apply(MainLoop.scala:100)
    at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
    at sbt.MainLoop$.next(MainLoop.scala:100)
    at sbt.MainLoop$.run(MainLoop.scala:93)
    at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:71)
    at sbt.MainLoop$$anonfun$runWithNewLog$1.apply(MainLoop.scala:66)
    at sbt.Using.apply(Using.scala:25)
    at sbt.MainLoop$.runWithNewLog(MainLoop.scala:66)
    at sbt.MainLoop$.runAndClearLast(MainLoop.scala:49)
    at sbt.MainLoop$.runLoggedLoop(MainLoop.scala:33)
    at sbt.MainLoop$.runLogged(MainLoop.scala:25)
    at sbt.StandardMain$.runManaged(Main.scala:57)
    at sbt.xMain.run(Main.scala:29)
    at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
    at xsbt.boot.Launch$.withContextLoader(Launch.scala:129)
    at xsbt.boot.Launch$.run(Launch.scala:109)
    at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:36)
    at xsbt.boot.Launch$.launch(Launch.scala:117)
    at xsbt.boot.Launch$.apply(Launch.scala:19)
    at xsbt.boot.Boot$.runImpl(Boot.scala:44)
    at xsbt.boot.Boot$.main(Boot.scala:20)
    at xsbt.boot.Boot.main(Boot.scala)
[error] sbt.compiler.EvalException: Type error in expression
[error] Use 'last' for the full log.
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? 

Doing it from outside the app directory I get:

$ sbt about
[info] Loading global plugins from /home/.sbt/0.13/plugins
[info] Set current project to / (in build file:/home/)
[info] This is sbt 0.13.5
[info] The current project is {file:/home/} 0.1-SNAPSHOT
[info] The current project is built against Scala 2.10.4
[info] Available Plugins: sbt.plugins.IvyPlugin, sbt.plugins.JvmPlugin, sbt.plugins.CorePlugin, sbt.plugins.JUnitXmlReportPlugin, EnsimePlugin, com.typesafe.sbt.SbtScalariform
[info] sbt, sbt plugins, and build definitions are using Scala 2.10.4

by TrustyPatches at September 17, 2014 09:52 AM

How to create an Akka actor with a different path?

If I create an actor (without a name) it will live at the path akka.tcp://system@192.168.1.2:2552/user/$a#-576914160. Is it possible to create an actor directly under a myGpurp sub-path? E.g. the resulting actor path would be akka.tcp://system@192.168.1.2:2552/user/myGpurp/$a#-576914160.
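A sketch of the usual workaround (not from the original post; MyGroup is a hypothetical name): actor paths mirror the supervision hierarchy, so create a parent actor named myGpurp and let it create the unnamed children:

import akka.actor.{Actor, ActorSystem, Props}

// children created via context.actorOf get paths under /user/myGpurp/
class MyGroup extends Actor {
  def receive = {
    case props: Props => sender() ! context.actorOf(props)  // e.g. /user/myGpurp/$a
  }
}

val system = ActorSystem("system")
val group  = system.actorOf(Props[MyGroup], "myGpurp")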

by Cherry at September 17, 2014 09:47 AM

CompsciOverflow

Build a recursive descent parser for a given grammar

Write a recursive descent parser for the following grammar:

    S -> if C then S ; 
        | while C do S ; 
        | id = num | id++
    C -> id == num | id != num

The source of the question is: http://1drv.ms/1md7Y7h
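A minimal sketch of what such a parser can look like in Scala (an editorial illustration, assuming the input is already tokenized and that id and num each stand for a single token):

class Parser(tokens: List[String]) {
  private var pos = 0
  private def peek: String = if (pos < tokens.length) tokens(pos) else "<eof>"
  private def eat(t: String): Unit =
    if (peek == t) pos += 1 else sys.error(s"expected '$t', got '$peek'")

  // S -> if C then S ; | while C do S ; | id = num | id++
  def parseS(): Unit = peek match {
    case "if"    => eat("if"); parseC(); eat("then"); parseS(); eat(";")
    case "while" => eat("while"); parseC(); eat("do"); parseS(); eat(";")
    case _ =>
      pos += 1                            // consume the id
      peek match {
        case "="  => eat("="); pos += 1   // consume the num
        case "++" => eat("++")
        case t    => sys.error(s"expected '=' or '++', got '$t'")
      }
  }

  // C -> id == num | id != num
  def parseC(): Unit = {
    pos += 1                              // consume the id
    if (peek == "==" || peek == "!=") pos += 1
    else sys.error(s"expected '==' or '!=', got '$peek'")
    pos += 1                              // consume the num
  }
}

// new Parser(List("if", "id", "==", "num", "then", "id", "++", ";")).parseS()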

by JUSTIN JOHNS at September 17, 2014 09:32 AM

/r/compsci

Help with Lambda Calculus

I keep hearing that lambda calculus is extremely simple, but for some reason I just can't grasp it. Any help is appreciated.


September 17, 2014 09:29 AM

StackOverflow

How to update Mongodb structure from string to object?

I need to update the image field from a string to an object. My old code is:

Scala :

case class Property (
  id: ObjectId = new ObjectId,
  image: String,
  description: String
)

Mongodb :

{
   "_id": ObjectId("5412b438e864b9afc27dcd43"),
   "_t": "models.Property",
   "image": "http: \/\/img.com\/8i0\/v1jj24e5by8nz4dy2bh6nnespw1i",
   "description": "image"
}

Now my updates are :

Scala :

case class Image (
  imageUrl: String, 
  isHosted: Boolean, 
  imageThumbUrl: String, 
  imageMediumUrl: String
)

case class Property (
  id: ObjectId = new ObjectId,
  image: Image,
  description: String
)

But when I do this, I'm getting [Exception: class models.Property requires value for 'image']. This is due to the old DB structure. Is there a query-based way to convert the old data to the new structure, or what code should I add to adapt to the new structure changes? Please help. Thanks in advance.

by Monnster at September 17, 2014 09:25 AM

/r/emacs

StackOverflow

Cross product in Scala

I want to have a binary operator cross (cross-product/cartesian product) that operates with traversables in Scala:

val x = Seq(1, 2)
val y = List("hello", "world", "bye")
val z = x cross y    // I can chain as many traversables as I like, e.g. x cross y cross w etc.

assert(z == Seq((1, "hello"), (1, "world"), (1, "bye"), (2, "hello"), (2, "world"), (2, "bye")))

What is the best way to do this in Scala only (i.e. not using something like scalaz)?
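A sketch in plain Scala (not from the original post): an implicit class plus a for-comprehension gives the binary operator. Note that chaining nests tuples (x cross y cross w yields ((A, B), C)), which is where a library like shapeless usually comes in:

implicit class CrossOps[A](xs: Traversable[A]) {
  def cross[B](ys: Traversable[B]): Traversable[(A, B)] =
    for (x <- xs; y <- ys) yield (x, y)
}

Seq(1, 2) cross List("hello", "world", "bye")
// ((1,hello), (1,world), (1,bye), (2,hello), (2,world), (2,bye))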

by wrick at September 17, 2014 09:22 AM

CompsciOverflow

How to show that this algorithm for evaluating polynomials works?

I'm having trouble showing how to solve this problem, in particular the part where it asks to "show that the following pseudo-code fragment finds the value of the polynomial...".

How exactly do I show that? I don't understand what that would entail, and my professor isn't exactly helpful: he says to prove it for all $n$, but I don't understand how to show that mathematically through programming. He says not just to give a particular example, but rather to show it works for all polynomials.

The whole question is this:

It is required to find the value of the polynomial $$P(x)=\displaystyle\sum_{k=0}^n a_k x^k$$

Show that the following pseudo-code fragment finds the value of the polynomial, given the coefficients $a_0,a_1,a_2,...a_n$ for a value of $x$.

y = 0;
i = n;
while (i >= 0) {
   y = a[i] + x * y;
   i = i - 1;
}
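For what it's worth (an editorial sketch): this fragment is Horner's method, and the usual way to show it works for all $n$ is a loop invariant plus induction rather than testing examples. Claim: after the loop iteration that processes index $i$,

$$y = \sum_{k=i}^{n} a_k x^{k-i}.$$

Base case ($i = n$): the first iteration sets $y = a_n + x \cdot 0 = a_n$. Inductive step: if $y = \sum_{k=i+1}^{n} a_k x^{k-i-1}$ going into the iteration for $i$, then afterwards

$$y' = a_i + x \sum_{k=i+1}^{n} a_k x^{k-i-1} = \sum_{k=i}^{n} a_k x^{k-i}.$$

After the final iteration ($i = 0$) the invariant gives $y = \sum_{k=0}^{n} a_k x^k = P(x)$, for every $n$ and every choice of coefficients.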

by TwilightSparkleTheGeek at September 17, 2014 09:15 AM

smaller size approximation to minimum vertex cover

Does there exist a simple approximation algorithm for the minimum vertex cover problem that aims to find a set smaller than (or equal to) the minimum?

The usual algorithms seem to aim for an approximation whose output is a cover, but may have more vertices than the minimum cover. Instead, I want a smaller set: I don't mind if some edges are left uncovered, so the result need not be a cover. Between two smaller sets, the one that covers more edges in total is preferable.

by jam123 at September 17, 2014 09:13 AM

Portland Pattern Repository

TheoryOverflow

Hard Instances for graph isomorphism testing

Is the case of strongly regular graphs the hardest one for GI testing?

where "hardest" is used in some "common sense" meaning, or "in average", so to speak.
Wolfram MathWorld mentions some "pathologically hard graphs". What are they?

My sample set of 25 pairs of graphs: http://funkybee.narod.ru/graphs.htm I tested a lot of others, but all of the same kind: SRG or RG from http://www.maths.gla.ac.uk/~es/srgraphs.html or generated by genreg.exe. If I generate, say, 1000 graphs then I test all 1000 * (1000 - 1) / 2 pairs. Of course, I don't test obvious ("silly") cases, e.g., graphs with different sorted degree vectors etc. But the process seems endless and to some extent smells futile. What testing strategy should I choose? Or is this question almost equivalent to the GI problem itself?

I even re-drew on paper a graph from thesis_pascal_schweitzer.pdf (suggested by @5501). Here is its nice picture: http://funkybee.narod.ru/misc/furer.jpg I'm not sure, but it seems to be exactly the kind of graph "which the k-dimensional Weisfeiler-Lehman algorithm cannot distinguish." But, gentlemen, copying graphs onto paper from e-books is too much even for me.

25

0100000000000000000000000
1010000000000000000000000
0101000000000000000000100
0010100000000010000000000
0001010000001000000000000
0000101000000000000000000
0000010100000000000000000
0000001010000000000000000
0000000101000000000000000
0000000010100000000000000
0000000001010000000000000
0000000000101000000000100
0000100000010000000000010
0000000000000010000001010
0001000000000101000000000
0000000000000010100000000
0000000000000001010000000
0000000000000000101000000
0000000000000000010100000
0000000000000000001010000
0000000000000000000101000
0000000000000100000010100
0010000000010000000001000
0000000000001100000000001
0000000000000000000000010

0100000000000000000000000
1010000000000000000000000
0101000000000000000000100
0010100000000010000000000
0001000000001000000010000
0000001000000000000001000
0000010100000000000000000
0000001010000000000000000
0000000101000000000000000
0000000010100000000000000
0000000001010000000000000
0000000000101000000000100
0000100000010000000000010
0000000000000010000001010
0001000000000101000000000
0000000000000010100000000
0000000000000001010000000
0000000000000000101000000
0000000000000000010100000
0000000000000000001010000
0000100000000000000100000
0000010000000100000000100
0010000000010000000001000
0000000000001100000000001
0000000000000000000000010

Bounty asking:
===========
Could anybody confirm that the last 2 pairs (#34 and #35 in the left textarea: http://funkybee.narod.ru/graphs.htm ) are isomorphic?
The matter is that they are based on this: http://funkybee.narod.ru/misc/mfgraph2.jpg from A Counterexample in Graph Isomorphism Testing (1987) by M. Furer, but I couldn't get them to come out NON-isomorphic.

PS#1
I took 4 fundamental pieces (the count must be an even square of some positive number, m^2), dovetailed them in a row -- so I got the 1st global graph; in its copy I swapped (crisscrossing) 2 central edges in each of the 4 pieces -- so I got the 2nd global graph. But they turned out to be isomorphic. What did I miss or misunderstand in Furer's fairytale?

PS#2
It seems I got it.
The 3 pairs #33, #34 and #35 (the very last 3 pairs on http://funkybee.narod.ru/graphs.htm ) are really amazing cases.

Pair #34:
        G1 and G2 are non-isomorphic graphs.
        In G1: edges (1-3),(2-4). In G2: edges (1-4),(2-3).
        No more diffs in them.

Pair #35:
        G11 and G22 are isomorphic graphs.
        G11 = G1 and G22 is a copy of G2, with only one difference:
        Edges (21-23),(22-24) were swapped like this: (21-24),(22-23)
        ... and two graphs get isomorphic
        as if 2 swaps annihilate each other.
        An odd number of such swaps makes the graphs NON-isomorphic again

Graph #33 (20 vertices, 26 edges) is still this: http://funkybee.narod.ru/misc/mfgraph2.jpg
Graphs in ##34, 35 were made just by coupling 2 basic graphs (#33), each result getting 40 vertices and 60 = 26 + 26 + 8 edges; the 8 new edges connect the 2 "halves" of the new ("big") graph. Really amazing, and exactly as Martin Furer says...

Case #33: g = h            ("h" is "g with one possible edges swap in its middle"
                                                  (see the picture))

Case #34: g + g != g + h        (!!!)


Case #35: g + g = h + h         (!!!)

by trg787 at September 17, 2014 08:59 AM

StackOverflow

Relay tripping program

I am using the Arduino platform for programming. I want to control the relay output serially and also automatically. Individually they work fine. The problem I am facing: if I send data serially, say 1, the relay activates but after some time goes off; similarly when I send zero. Automatic operation works fine, i.e. once the current exceeds 0.6 A it waits 10 s and the relay activates again.

My problems: 1) Individually, if I send 0 or 1 the relay activates and deactivates. But in my code below, if I send 1 I see the relay activate and then deactivate; it even switches off for a few seconds but comes back to the on position. 2) When the relay operates in automatic mode, where I call the Relay_Activate() function, it waits 10 s. If I send an activate command within these 10 s, how do I make the relay reconnect?

int Analog_Pin=5;
int newaverage;
float Output_Current;
#define RELAY1  7
#define MAX_TRIP_COUNT  5
float Current_Limit=0.6;
static int Trip_Count=0;
static int Tripped_Flag=1;
int Serial_Status=0;



void TakeReading()
{
  newaverage = analogRead(A5);
  Output_Current = 0.0336666666667*newaverage - 17.17; 
}


void Chk_Relay_Tripped()
{
  if(Output_Current>=Current_Limit)
  {
    Trip_Count=Trip_Count+1; 
    if((Trip_Count>=MAX_TRIP_COUNT) &&(Serial_Status==0))
    {

      Trip_Count=0;
      Relay_Activate();
    }
    else
      if(Serial_Status==1)
      {
        Serial.println("> MODE");
        Relay_Deactivate(); 
      }

  } 
  else
  {
    Trip_Count=Trip_Count-1;
    if(Trip_Count<0)
    {
      Trip_Count=0;
    }

    if(Trip_Count<MAX_TRIP_COUNT && Serial_Status==1)
    {
      Relay_Deactivate();
      Serial_Status=0;
    }

  }

}




void Relay_Activate()
{
  // Busy-waits for 10 s with the relay held HIGH; loop() is blocked for the
  // whole window, so serial commands arriving meanwhile are not processed.
  for (unsigned long start = millis(); millis() - start < 10000;)
  {
    digitalWrite(RELAY1,HIGH);
    Serial_Status=1;
  }
}

void  Relay_Deactivate()
{
  digitalWrite(RELAY1,LOW);
  Serial_Status=0;
}

void Relay_Intialize()
{
  digitalWrite(RELAY1,LOW);
}

void setup() {
  Serial.begin(9600); // set serial speed
  pinMode(RELAY1, OUTPUT); // set LED as output
  //digitalWrite(RELAY1, LOW); //turn off LED

  pinMode(Analog_Pin,INPUT);
  Relay_Intialize();
}




void loop(){
  TakeReading();
  Chk_Relay_Tripped();
  Serial.println(Output_Current);
  Serial.print("out count:");
  Serial.println(Trip_Count);
  Serial.print("Serial_Status:");
  Serial.println(Serial_Status);

  //while (Serial.available() == 0); // do nothing if nothing sent
  int val = Serial.read() - '0';
  if (val == 1) { // test for command 1 then turn on LED
    Serial.println("RELAY on");
    digitalWrite(RELAY1, LOW); // turn on LED
    Serial_Status=1;

  }
  else if (val == 0) // test for command 0 then turn off LED
  {
    Serial.println("RELAY OFF");
    digitalWrite(RELAY1, HIGH); // turn off LED
    Serial_Status=0;
  }
  delay(500);
}

by AMPS at September 17, 2014 08:58 AM

/r/emacs

CompsciOverflow

Finite Automata [duplicate]

This question already has an answer here:

Model the marble toy with a finite automaton. Exercise 2.2.1(a) from the book Introduction to Automata Theory, Languages, and Computation.

[figure: the marble toy]

The transition table looks like this:

[figure: the transition table]

My question is about this table: how do I fill it in? Please give a short explanation of filling in the transition table.

by Number 1010001 at September 17, 2014 08:44 AM

StackOverflow

How to avoid losing type information

Suppose I have something like this:

trait Cursor {
}

trait Column[T] {
   def read(cusor: Cursor): T
}

trait ColumnReader {
   // Product is not generic in Scala, so read this as the closest compiling
   // approximation of "a tuple of columns"; element types are erased to Any
   def readColumns(columns: Product, cursor: Cursor): Iterable[Any] =
       for (column <- columns.productIterator.toIterable)
         yield column.asInstanceOf[Column[_]].read(cursor)
}

The problem of the readColumns() API is that I lose the type information, i.e., if I have this:

object columnString extends Column[String] {
   def read(cursor: Cursor): String = ...
}

object columnInt extends Column[Int] {
   def read(cursor: Cursor): Int = ...
}

An expression like new ColumnReader().readColumns((columnString, columnInt), cursor) returns Iterable[Any]. I would like to return something typed like Tuple2[String, Int], but I don't know how: I lose type information that would be useful to the compiler.

Maybe a library like Shapeless could be useful.

I'm sure Scala has some tool for dealing with problems like this.

Any ideas?

by david.perez at September 17, 2014 08:37 AM

Generically Finding Max Item in List

In Haskell I wrote a function that, given a List of a, returns Maybe a.

max' :: Ord a => [a] -> Maybe a
max' []     = Nothing
max' (x:xs) = Just $ foldr (\y acc -> if (y > acc) then y else acc) x xs

How could I write this in Scala? I'm not sure of an Ord equivalent in Scala.
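One possible translation (a sketch): the Scala analogue of Haskell's Ord constraint is an implicit Ordering[A]:

// Ordering[A] plays the role of the `Ord a` constraint
def max[A](xs: List[A])(implicit ord: Ordering[A]): Option[A] = xs match {
  case Nil       => None
  case h :: rest => Some(rest.foldRight(h)((y, acc) => if (ord.gt(y, acc)) y else acc))
}

max(List(1, 3, 2))     // Some(3)
max(List.empty[Int])   // None

More idiomatically, the whole body can be xs.reduceOption(ord.max).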

by Kevin Meredith at September 17, 2014 08:15 AM

Planet Theory

Congratulations to MacArthur Fellowship

My hearty congratulations to the MacArthur Fellowship for handing down the right decision and naming Craig Gentry its fellow – a title better known as "genius". What a truly deserving winner! As readers of this blog know full well, Craig has done seminal work in cryptography – time and time again. In his prize-winning Ph.D. work in 2009 Craig achieved what many had considered to be impossible – fully homomorphic encryption. In just three years he (with co-authors Sanjam Garg and Shai Halevi) proposed another object – a cryptographic multilinear map – whose existence I'd been willing to bet against. Last year Craig (with several more co-authors) constructed an obfuscation mechanism with two amazing properties: it looks impossibly difficult to achieve and useless for any cryptographic application. Both statements – you see the pattern here – are wrong. Indistinguishability obfuscation, as it has become known, quite plausibly exists, and we are still in the process of grasping its true potential.

Congratulations!


by Ilya Mironov at September 17, 2014 08:03 AM

Lamer News

QuantOverflow

Why is two-factor model so popular for bond futures?

Given that which bond in the basket becomes CTD depends massively on idiosyncratic moves among different bonds, should we not always be using an N-factor model instead of a 2-factor model?

By using only 2 factors we capture only slope and level changes, ignoring curvature and other higher-order movements which should, in theory, also be very important for determining the CTD, especially when there are lots of closely contending bonds.

By a 2-factor model, I mean picking the first two eigenvectors from PCA and then using these to model yields.

by InnocentR at September 17, 2014 07:49 AM

StackOverflow

Can someone please explain the right way to use SBT?

I'm coming out of the closet on this! I don't understand SBT. There, I said it; now help me please.

All roads lead to Rome, and that is the same for SBT: To get started with SBT there is SBT, SBT Launcher, SBT-extras, etc, and then there are different ways to include and decide on repositories. Is there a 'best' way?

I'm asking because sometimes I get a little lost. The SBT documentation is very thorough and complete, but I find myself not knowing when to use build.sbt or project/build.properties or project/Build.scala or project/plugins.sbt.

Then it gets fun: there are the Scala IDE and SBT. What is the correct way of using them together? Which comes first, the chicken or the egg?

Most important, probably: how do you find the right repositories and versions to include in your project? Do I just pull out a machete and start hacking my way forward? I quite often find projects that include everything and the kitchen sink, and then I realize I'm not the only one who gets a little lost.

As a simple example: right now, I'm starting a brand new project. I want to use the latest features of Slick and Scala, and this will probably require the latest version of SBT. What is the sane starting point, and why? In which file should I define things, and how should it look? I know I can get this working, but I would really like an expert opinion on where everything should go (why it should go there would be a bonus).

I've been using SBT for small projects for well over a year now. I used SBT and then SBT Extras (as it made some headaches magically disappear), but I'm not sure why I should be using one or the other. I'm just getting a little frustrated at not understanding how things fit together (SBT and repositories), and I think it would save the next guy coming this way a lot of hardship if this could be explained in human terms.

UPDATE:

For what it's worth, I created a blank SBT project directory for new guys to get going quicker: SBT-jumpstart

by JacobusR at September 17, 2014 07:49 AM

Variables defined in group_vars/file used within same file not working

Here's what I am trying to do with Ansible:

group_vars/file1

file1 looks something like below:

var1: value1

var2: /some-path/{{ var1 }}

On executing the playbook on the target nodes, the output is like below:

/some-path/{}

Isn't variable substitution supposed to work this way?

by user3228188 at September 17, 2014 07:42 AM

Typesafe Activator: Use own template repository

Starting the activator with activator new, you get a list of project seeds and you can also get templates. Is it possible to have one's own source/repository of seeds and templates, or is there a way to configure it not to use the Typesafe ones?

I want to achieve something like this:

$~- activator new

Fetching the latest list of templates...

Browse the list of templates: http://my-templates
Choose from these featured templates or enter a template name:
1) My-Own-Seed
2) CompanyConfiguration

(hit tab to see a list of all templates)

I know of g8 but I don't want to use it because I like the tight integration with sbt of activator.

EDIT: I found documentation at https://typesafe.com/activator/template/contribute but I'm not sure whether I can arrange for the seeds to be usable only within an enclosed environment.

by Andreas Neumann at September 17, 2014 07:39 AM

Difference between scan and scanLeft in Scala [duplicate]

This question already has an answer here:

What are the differences between scan and scanLeft?

For instance,

(1 to 3).scan(10)(_-_)
res: Vector(10, 9, 7, 4)

(1 to 3).scanLeft(10)(_-_)
res: Vector(10, 9, 7, 4)

deliver the same result, clearly in contrast to

(1 to 3).scanRight(10)(_-_)
res: Vector(-8, 9, -7, 10)

by enzyme at September 17, 2014 07:38 AM

Execute Scala Futures in serial one after the other

Given a method that returns a Future like this...

def myMethod(name: String, count: Int, default: Boolean): Future[Unit] = {
  ...
}

... I need to invoke it N times, so I've defined a list of tuples containing the parameters to be passed:

val paramList = List(
  ("text1", 22, true),
  ("text2", 55, true),
  ("text3", 77, false)
)

How do I invoke myMethod with Future.traverse?

Future.traverse(paramList)(myMethod _).tupled(/* how do I pass the current tuple here? */)
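A minimal sketch of one way (not from the original post): destructure each tuple inside the function handed to traverse. Note that traverse starts the futures eagerly, so they may run concurrently; strictly serial execution needs explicit chaining, e.g. a foldLeft over flatMap:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// concurrent: one future per tuple
val all: Future[List[Unit]] =
  Future.traverse(paramList) { case (name, count, default) =>
    myMethod(name, count, default)
  }

// serial: each call starts only after the previous one completes
val serial: Future[Unit] =
  paramList.foldLeft(Future.successful(())) { case (acc, (name, count, default)) =>
    acc.flatMap(_ => myMethod(name, count, default))
  }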

by j3d at September 17, 2014 07:21 AM

QuantOverflow

Logic behind Gordon Growth Model in a DCF analysis?

Sorry, I wanted to ask this on the finance/money forum, but they don't support LaTeX there.


Let's say we are valuing a company using the DCF methodology with a 5-year projection period.

We project free cash flows of $F_{1},\ldots,F_{5}$. Then if $w$ is the WACC of this company and $g$ is the perpetual growth rate from year 5 forward, the sum of the future cash flows discounted at $w$ is

$$V_{1}:=F_{1}(1+w)^{-1}+\ldots+F_{5}(1+w)^{-5}+F_{5}\sum_{t=6}^{\infty}\frac{(1+g)^{t-5}}{(1+w)^{t}}.$$

This formula for the Gordon Growth model replaces the infinite sum with the easily computed geometric series $$F_{5}\sum_{t=1}^{\infty}\frac{(1+g)^{t}}{(1+w)^{t}}=F_{5}\frac{1+g}{w-g},$$ and therefore (basically) DOUBLE COUNTS (!!) the cash flows $F_{1},\ldots,F_{5}$ to get $$\begin{align*} V_{2}&:=F_{1}(1+w)^{-1}+\ldots+F_{5}(1+w)^{-5}+F_{5}\sum_{t=1}^{\infty}\frac{(1+g)^{t}}{(1+w)^{t}}\\ &=F_{1}(1+w)^{-1}+\ldots+F_{5}(1+w)^{-5}+F_{5}\frac{1+g}{w-g}\\ &\gg V_{1}.\end{align*}$$

What am I missing here?

EDIT

Even if you could convince me of the legitimacy of $F_{5}(1+g)^{t-5}\mapsto F_{1}(1+g)^{t}$ in order to get a uniformly indexed sum (and hence a geometric series), i.e. that $F_{5}$ equals the 6-fold growth of $F_{0}$ before we first start to sum it, it would still be very hard to convince me that it is legitimate not to truncate the series and to re-index the sum at $t=1$.
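For reference, the geometric series with the original indexing evaluates cleanly without any re-indexing at $t=1$ (a worked step, substituting $s = t-5$):

$$F_{5}\sum_{t=6}^{\infty}\frac{(1+g)^{t-5}}{(1+w)^{t}} = \frac{F_{5}}{(1+w)^{5}}\sum_{s=1}^{\infty}\left(\frac{1+g}{1+w}\right)^{s} = \frac{F_{5}(1+g)}{(w-g)(1+w)^{5}},$$

i.e. the familiar $F_{5}\frac{1+g}{w-g}$ terminal value appears, but discounted back five periods.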

by Taylor Martin at September 17, 2014 07:07 AM

StackOverflow

Upsert in Slick

Is there a way I can neatly do an upsert operation in Slick? The following works but is too obscure/verbose and I need to explicitly state the fields that should be updated:

val id = 1
val now = new Timestamp(System.currentTimeMillis)
val q = for { u <- Users if u.id === id } yield u.lastSeen 
q.update(now) match {
  case 0 => Users.insert((id, now, now))
  case _ => Unit
}
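An editorial aside: newer Slick versions (2.1 and later) expose insertOrUpdate on table queries, which, assuming that API is available and id is the primary key, collapses the above to a sketch like:

Users.insertOrUpdate((id, now, now))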

by Synesso at September 17, 2014 07:03 AM

Portland Pattern Repository

StackOverflow

How to assign threads to per request actors in Akka

I have a consistent hashing router (Scala/Akka) that assigns a particular message type a to a particular set of per-request actors A, a message type b to a particular set of per-request actors B, etc. The question is: how do I assign the set of actors A to its own thread A, and the set of actors B to its own thread B?

The hope is that actor set A does not block actor set B, and B does not block A. Sadly, I lack Akka dispatcher experience, as well as multithreading experience. Thanks.
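A sketch of the usual approach (WorkerA, WorkerB and the dispatcher names are hypothetical): define one dispatcher per actor group in application.conf (a PinnedDispatcher dedicates a thread to each actor; a regular dispatcher gives the group its own pool), then attach it via Props.withDispatcher:

import akka.actor.Props

// application.conf (assumed):
//   dispatcher-a { type = PinnedDispatcher, executor = "thread-pool-executor" }
//   dispatcher-b { type = PinnedDispatcher, executor = "thread-pool-executor" }

val workerA = system.actorOf(Props[WorkerA].withDispatcher("dispatcher-a"))
val workerB = system.actorOf(Props[WorkerB].withDispatcher("dispatcher-b"))

With separate dispatchers, a blocked A actor can at worst starve dispatcher A's threads; B's threads are untouched.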

by Lasf at September 17, 2014 06:54 AM

Building a list of "incremental sums"

What is an efficient, functional way of building a list of "incremental sums"?

For example, given

val (a,b,c,d) = (2,3,5,6)
val list1 = List(a, b, c, d)

How would you implement f such that:

list1.map(f)

would result in

List(a, a+b, a+b+c, a+b+c+d)
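map alone can't do it (it has no memory across elements), but scanLeft carries exactly this running state; a sketch:

list1.scanLeft(0)(_ + _).tail   // List(2, 5, 10, 16), i.e. List(a, a+b, a+b+c, a+b+c+d)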

by snappy at September 17, 2014 06:49 AM

split sentence into words separated by more than one space

val sentence = "1 2  3   4".split(" ")

The above gives me:

Array(1, 2, "", 3, "", "", 4)

But I need only the words:

Array(1, 2, 3, 4)

How can I split the sentence when the words are separated by multiple spaces?
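A sketch: split on a regular expression matching a run of whitespace instead of a single space:

val words = "1 2  3   4".split("\\s+")   // Array(1, 2, 3, 4)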

by anonymous_123 at September 17, 2014 06:48 AM

CompsciOverflow

Do poly-computable differentiable functions on [0,1] with bounded number of turning points have poly-time computable inverse?

  1. Given a polynomially computable continuous function which is a composite of m strictly monotone functions, can we guarantee the existence of a polynomially computable inverse?

  2. The function I have in mind is a rational function, and all the coefficients (in the numerator and denominator) are positive integers but are unknown. I realize that for strictly monotone functions, as long as the function is honest (arbitrarily long fractions don't get mapped to short ones; are rational functions honest?), a binary search algorithm will find the inverse. But when there are m maxima and minima, how does one devise an appropriate algorithm that won't get stuck?

  3. We are of course restricting the function to $\mathbb{Q}\to \mathbb{Q}$ and for a given $q\in \mathbb{Q}$, for our purposes, it will suffice to have a polynomial time algorithm to find the minimum (maximum) approximate rational pre-image, where the error $\epsilon$ can be made arbitrarily small in time polynomial in $\epsilon^{-1}$.

by Daniels Pictures at September 17, 2014 06:29 AM

StackOverflow

Play framework - retrieving the Date header in the request

I need to access the Date: header when I handle the request, but this seems to be "swallowed" by the framework; any other header (even a made-up FooBar one) shows up and I can get it, but this one gives me None (I'm using Postman to send a simple GET request; everything else works just fine):

println("Date: " + request.headers.get("Date").getOrElse("no date!"))

returns "no date!" no matter how I try to send something sensible.

I'm wondering whether this gets processed before the request object reaches my Action.

I need the actual string value sent, as this should be part of the request's signature - so an equivalent Date object representing the same value would not be of much use (as it needs to be part of the hash, to avoid replay attacks).

Just as a test, I replaced the Date header with a Date-Auth one, and this one shows up just fine:

ArrayBuffer((Date-Auth, ArrayBuffer(Wed, 15 Nov 2014 06:25:24 GMT))

Any ideas or suggestions greatly appreciated!

by Marco at September 17, 2014 06:24 AM

Scala 2.11.2 ScriptEngine throws error

I'm trying to run the Scala ScriptEngine in an IntelliJ IDEA Scala Worksheet (Scala 2.11.2).

Next code:

import javax.script.ScriptEngineManager
val e = (new ScriptEngineManager()).getEngineByName("scala")
e.eval("1 to 10 foreach println")

Throws error:

e: javax.script.ScriptEngine = scala.tools.nsc.interpreter.IMain@49049a04
[init] error: error while loading Object, Missing dependency 'object scala in compiler mirror', required by C:\Program Files\Java\jdk1.8.0_11\jre\lib\rt.jar(java/lang/Object.class)

Failed to initialize compiler: object scala in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
scala.reflect.internal.MissingRequirementError: object scala in compiler mirror not found.
    at scala.reflect.internal.MissingRequirementError$.signal(D:/workspace/Poster/src/test.sc:13)
    at scala.reflect.internal.MissingRequirementError$.notFound(D:/workspace/Poster/src/test.sc:14)
    at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(D:/workspace/Poster/src/test.sc:49)
    at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(D:/workspace/Poster/src/test.sc:62)
    at scala.reflect.internal.Mirrors$RootsBase.getPackage(D:/workspace/Poster/src/test.sc:169)
    at scala.reflect.internal.Definitions$DefinitionsClass.ScalaPackage$lzycompute(D:/workspace/Poster/src/test.sc:157)
    at scala.reflect.internal.Definitions$DefinitionsClass.ScalaPackage(D:/workspace/Poster/src/test.sc:157)
    at scala.reflect.internal.Definitions$DefinitionsClass.ScalaPackageClass$lzycompute(D:/workspace/Poster/src/test.sc:158)
    at scala.reflect.internal.Definitions$DefinitionsClass.ScalaPackageClass(D:/workspace/Poster/src/test.sc:158)
    at scala.reflect.internal.Definitions$DefinitionsClass.init(D:/workspace/Poster/src/test.sc:1373)
    at scala.tools.nsc.Global$Run.<init>(D:/workspace/Poster/src/test.sc:1225)
    at scala.tools.nsc.interpreter.IMain.compileSourcesKeepingRun(D:/workspace/Poster/src/test.sc:384)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.compileAndSaveRun(D:/workspace/Poster/src/test.sc:803)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.compile(D:/workspace/Poster/src/test.sc:762)
    at scala.tools.nsc.interpreter.IMain.bind(D:/workspace/Poster/src/test.sc:626)
    at scala.tools.nsc.interpreter.IMain.bind(D:/workspace/Poster/src/test.sc:663)
    at scala.tools.nsc.interpreter.IMain$$anonfun$quietBind$1.apply(D:/workspace/Poster/src/test.sc:662)
    at scala.tools.nsc.interpreter.IMain$$anonfun$quietBind$1.apply(D:/workspace/Poster/src/test.sc:662)
    at #worksheet#.#worksheet#(D:/workspace/Poster/src/test.sc:200)

build.sbt as follows:

name := "Poster"

version := "1.0"

libraryDependencies += "org.seleniumhq.selenium" % "selenium-java" % "2.42.2"

libraryDependencies += "org.scala-lang" % "scala-compiler" % "2.11.2"

libraryDependencies += "org.scala-lang" % "scala-library" % "2.11.2"

All needed dependencies are included; I don't understand why it doesn't work.

The same project works fine in Eclipse Luna + Eclipse IDE 4!

How do I run it in IntelliJ IDEA?
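For reference, the error text itself suggests the usual workaround: set usejavacp on the interpreter's settings, which means casting the engine to IMain (a sketch):

import javax.script.ScriptEngineManager
import scala.tools.nsc.interpreter.IMain

val e = (new ScriptEngineManager()).getEngineByName("scala")
// the Scala engine is an IMain; tell it to use the Java classpath
e.asInstanceOf[IMain].settings.usejavacp.value = true
e.eval("1 to 10 foreach println")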

by Lunigorn at September 17, 2014 06:07 AM

How to mark just one snapshot dependency "offline"?

In my sbt project there are some dependencies; some of them are snapshots:

libraryDependencies ++= Seq(
  "aaa" % "bbb" % "1.1.0-SNAPSHOT",
  "aaa" % "ccc" % "0.2.0-SNAPSHOT"
)

The problem is when I run sbt or sbt update, the snapshots will be downloaded each time.

I see that sbt has an offline setting; if I set it to true, none of the snapshots will be downloaded.

But what if I want only one of them not to be downloaded, e.g. "aaa" % "bbb" % "1.1.0-SNAPSHOT" (because it's slow, say), while the other snapshots still get downloaded?

by Freewind at September 17, 2014 06:07 AM

Remove standard english language stop words in Stanford Topic Modeling Toolbox

I am using the Stanford Topic Modeling Toolbox 0.4.0 for LDA. I noticed that if I want to remove standard English stop words, I can use a StopWordFilter("en") as the last step of the tokenizer, but how do I use it?

import scalanlp.io._;
import scalanlp.stage._;
import scalanlp.stage.text._;
import scalanlp.text.tokenize._;
import scalanlp.pipes.Pipes.global._;

import edu.stanford.nlp.tmt.stage._;
import edu.stanford.nlp.tmt.model.lda._;
import edu.stanford.nlp.tmt.model.llda._;

val source = CSVFile("pubmed-oa-subset.csv") ~> IDColumn(1);

val tokenizer = {
  SimpleEnglishTokenizer() ~>            // tokenize on space and punctuation
  CaseFolder() ~>                        // lowercase everything
  WordsAndNumbersOnlyFilter() ~>         // ignore non-words and non-numbers
  MinimumLengthFilter(3)                 // take terms with >=3 characters
  StopWordFilter("en")                   // how to use it? it's not working.
}
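A likely culprit (an editorial note, assuming StopWordFilter("en") composes with ~> like the other stages): without a trailing ~>, MinimumLengthFilter(3) and StopWordFilter("en") are two separate statements, and only the last expression becomes the value of the block, discarding the chain above it. A sketch of the corrected tokenizer:

val tokenizer = {
  SimpleEnglishTokenizer() ~>
  CaseFolder() ~>
  WordsAndNumbersOnlyFilter() ~>
  MinimumLengthFilter(3) ~>     // the ~> missing in the version above
  StopWordFilter("en")
}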

val text = {
  source ~>                              // read from the source file
  Column(4) ~>                           // select column containing text
  TokenizeWith(tokenizer) ~>             // tokenize with tokenizer above
  TermCounter() ~>                       // collect counts (needed below)
  TermMinimumDocumentCountFilter(4) ~>   // filter terms in <4 docs
  TermDynamicStopListFilter(30) ~>       // filter out 30 most common terms
  DocumentMinimumLengthFilter(5)         // take only docs with >=5 terms
}

// turn the text into a dataset ready to be used with LDA
val dataset = LDADataset(text);

// define the model parameters
val params = LDAModelParams(numTopics = 30, dataset = dataset,
  topicSmoothing = 0.01, termSmoothing = 0.01);

// Name of the output model folder to generate
val modelPath = file("lda-"+dataset.signature+"-"+params.signature);

// Trains the model: the model (and intermediate models) are written to the
// output folder.  If a partially trained model with the same dataset and
// parameters exists in that folder, training will be resumed.
TrainCVB0LDA(params, dataset, output=modelPath, maxIterations=1000);

// To use the Gibbs sampler for inference, instead use
// TrainGibbsLDA(params, dataset, output=modelPath, maxIterations=1500);

by john at September 17, 2014 06:03 AM

Portland Pattern Repository

Fred Wilson

Toshi

Our portfolio company Coinbase announced something yesterday that went largely unnoticed, but might be one of the most important things to happen in the Bitcoin space in a while.

They put out a bunch of developer tools under the name Toshi, including a full open source version of their Bitcoin node. When you combine Toshi with the core Bitcoin APIs it comes with and the Coinbase APIs, you get a platform for building Bitcoin applications that is unmatched in the market.

The reality is building on top of the Bitcoin Core is not a simple task. There is a lot you need to do to make it work. Coinbase has been building on top of the Bitcoin Core for over two years and has addressed many (most?) of the obvious needs and they are now making all of that technology available to developers who want to build Bitcoin applications but don’t want to get knee deep in the Bitcoin Core.

There is a free hosted version of Toshi, you can download and run Toshi on your own servers, or you can deploy Toshi to Heroku with just one click.

If you are building Bitcoin applications or thinking about it, check out Toshi. I think making Bitcoin easier for developers is a big thing and I’m pleased to see Coinbase doing exactly that.

by Fred Wilson at September 17, 2014 06:00 AM

/r/osdev

StackOverflow

How to get an actor's physical path from an Akka actor?

Here is the actor code:

import akka.actor.Actor

class OneActor extends Actor {
    def receive = {
        // what should I call here to get it?
        case _  => println(physicalPath)
    }
}

I can use some "build-in" variables:

  • context - but it do not contains any usefull method about path
  • actorPath - contains only local path

Any ideas?

Updated

Also there is self.path.address, but it returns only the path to the root actor.
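A sketch of one commonly suggested approach (API details can vary by Akka version; assumes a remoting-enabled provider): ask the ExtendedActorSystem for its default remote address and render the path against it:

import akka.actor.{Actor, Address, ExtendedActorSystem}

class OneActor extends Actor {
  // the transport address the remote provider listens on
  val addr: Address =
    context.system.asInstanceOf[ExtendedActorSystem].provider.getDefaultAddress

  def receive = {
    // e.g. akka.tcp://system@192.168.1.2:2552/user/$a
    case _ => println(self.path.toStringWithAddress(addr))
  }
}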

by Cherry at September 17, 2014 05:34 AM

CompsciOverflow

Is "ternary search" an appropriate term for the algorithm that optimizes a unimodal function on a real interval?

Suppose that I want to optimize a unimodal function defined on some real interval. I can use the well-known algorithm as described in Wikipedia under the name of ternary search.

In the case of the algorithm that repeatedly halves intervals, it is common to reserve the term binary search for discrete problems and to use the term bisection method otherwise. Extrapolating this convention, I suspect that the term trisection method might apply to the algorithm that solves my problem.

My question is whether it is common among academics, and safe to use in, e.g., senior theses, to apply the term ternary search even if the algorithm is applied to a continuous problem. I need a reputable source for this. I'm also interested in whether the term trisection method actually exists.

by Pteromys at September 17, 2014 05:28 AM

StackOverflow

Pinning list elements to position in a merged list in Scala

This seems like a simple scenario, but I'm stumped on how to solve it elegantly/functionally. I have two lists, val pinnedStrings: Seq[(String, Int)] and val fillerStrings: Seq[String]. I want to merge them, but with each pinned string guaranteed to be at its paired position in the output list. So if I have:

val pinnedStrings = Seq("apple" -> 1, "banana" -> 4, "cherry" -> 6)
val fillerStrings = Seq("alpha", "bravo", "charlie", "delta", "echo", "foxtrot") 

Then the output should be:

Seq("alpha", "apple", "bravo", "charlie", "banana", "delta", "cherry", "echo", "foxtrot")

Let's say that if there's not enough filler to reach a pinned string, we drop the pinned string. (Or if it's simpler to put all leftover pinned strings at the end, that's fine too.)
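A recursive sketch (not from the original post): index the pins by position, then walk output positions, preferring a pin and otherwise consuming filler; when the filler runs out, the remaining pins are dropped:

def merge(pinned: Seq[(String, Int)], filler: Seq[String]): List[String] = {
  val byPos = pinned.map { case (s, i) => i -> s }.toMap
  def go(i: Int, rest: List[String]): List[String] = byPos.get(i) match {
    case Some(s) => s :: go(i + 1, rest)          // a pinned slot
    case None    => rest match {
      case f :: tail => f :: go(i + 1, tail)      // a filler slot
      case Nil       => Nil                       // out of filler: drop later pins
    }
  }
  go(0, filler.toList)
}

// merge(pinnedStrings, fillerStrings) == List("alpha", "apple", "bravo",
//   "charlie", "banana", "delta", "cherry", "echo", "foxtrot")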

by acjay at September 17, 2014 05:25 AM

Lambda the Ultimate

What's in store for the most widely used language by discerning hackers?

Or, in other words, what's the future of Emacs Lisp (and unavoidable HN discussion).

The original message contains some interesting tidbits. I am not sure how the discussion on emacs-devel will develop. But speculating about things such as Guile elisp is, of course, our bailiwick.

September 17, 2014 05:14 AM

StackOverflow

Flattening a map of sets

I am trying to flatten a map where the keys are traversables, in the sense that:

Map( Set(1, 2, 3) -> 'A', Set(4, 5, 6) -> 'B')

should flatten to:

Map(5 -> B, 1 -> A, 6 -> B, 2 -> A, 3 -> A, 4 -> B)

Here is what I did:

def fuse[A, B, T <: Traversable[A]](mapOfTravs: Map[T, B]): Map[A, B] = {
  val pairs = for {
    trav <- mapOfTravs.keys
    key <- trav
  } yield (key, mapOfTravs(trav))
  pairs.toMap
}   

It works. But:

  1. Is there a simpler way to do this?

  2. I'm not very comfortable with the Scala type system and I'm sure this can be improved. I have to specify the types explicitly whenever I use my function:

    val map2 = Map( Set(1, 2, 3) -> 'A', Set(4, 5, 6) -> 'B')
    val fused2 = fuse[Int, Char, Set[Int]](map2)
    
    val map1: Map[Traversable[Int], Char] = Map( Set(1, 2, 3) -> 'A', Set(4, 5, 6) -> 'B')
    val fused1 = fuse[Int, Char, Traversable[Int]](map1)
    

P.S.: this fuse function does not make much sense when the key traversables have a non-empty intersection.
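For comparison, a flatter formulation (a sketch; how well type inference behaves on the existential may depend on the Scala version):

def fuse2[A, B](m: Map[_ <: Traversable[A], B]): Map[A, B] =
  m.toSeq.flatMap { case (ks, v) => ks.map(_ -> v) }.toMap

// fuse2(Map(Set(1, 2, 3) -> 'A', Set(4, 5, 6) -> 'B'))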

by toto2 at September 17, 2014 04:53 AM

Planet Theory

Maryland Theory Day October 10!


Univ of Maryland at College Park is having a Theory Day
Friday October 10.

Free Registration and Free Lunch! (there are no economists coming to tell us there is no such thing).

For information and registration, go here

A good way to learn lots of current theory in a short time.

Schedule:

8:30-9:00 Light Breakfast and Intro Remarks

9:00-9:20  Gasarch, UMCP
NIM with Cash

9:25-9:45  Mount, UMCP
A New Algorithm for Approximating the Euclidean Minimum Spanning Tree

9:50-10:10 Samir, UMCP:
To do or not to do: scheduling to minimize energy

10:20-11:00 Coffee Break

11:00-12:00 Distinguished Invited Speaker Avrim Blum, CMU
Reconstructing preferences and priorities from opaque transactions

12:00-1:00 Catered Lunch

1:00-2:00 Distinguished Invited Speaker Sanjeev Arora, Princeton
Overcoming the intractability bottleneck in unsupervised learning.

2:00-2:30 Coffee Break

2:30-2:50 Elaine Shi, UMCP
Circuit ORAM and Tightness of the Goldreich-Ostrovsky bound

2:55-3:15 David Harris, UMCP
The Moser-Tardos Framework with Partial Resampling

3:20-3:40 Mohammad Hajiaghayi, UMCP
Streaming Algorithms for Estimating the Matching Size in Planar Graphs and Beyond

3:45-4:05 Michael Dinitz, JHU
Explicit Expanding Expanders

4:10-5:00 Poster Session in 2120 (Grad Students)


by GASARCH (noreply@blogger.com) at September 17, 2014 04:48 AM

StackOverflow

Why trait with implicit values should be put in the beginning of a scala file? [duplicate]

I have a Scala file which uses a trait to provide implicit values:

class A
class B

class Service  {
  def check(implicit a:A, b:B) = println("hello")
}

object Main extends App with Dependencies {

  def run() {
    val service = new Service
    service.check
  }

}

trait Dependencies {
  implicit val a = new A
  implicit val b = new B
}

But it doesn't compile; the error is:

 error: could not find implicit value for parameter a: A
           service.check
               ^

But, if I put the trait Dependencies at the beginning of the file:

trait Dependencies {
  implicit val a = new A
  implicit val b = new B
}

// other code are here

It compiles!

Why must we put it at the beginning?
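For the record, a detail that reportedly matters here (hedged, since the question above is left open): implicit vals whose result type is left to inference are only visible after their definition point in the file, so annotating the types is a sketch of a fix that makes the order irrelevant:

trait Dependencies {
  // Explicit result types let the compiler resolve these implicits
  // even when the trait appears later in the same source file.
  implicit val a: A = new A
  implicit val b: B = new B
}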

by Freewind at September 17, 2014 04:15 AM

XKCD

Lobsters

/r/netsec

StackOverflow

Clojure zip function

I need to build a seq of seqs (vec of vecs) by combining the first, second, etc. elements of the given seqs.

After a quick search and a look at the cheat sheet I haven't found one, and finished by writing my own:

(defn zip 
  "From the sequence of sequences return a another sequence of sequenses
  where first result sequense consist of first elements of input sequences
  second element consist of second elements of input sequenses etc.

  Example:

  [[:a 0 \\a] [:b 1 \\b] [:c 2 \\c]] => ([:a :b :c] [0 1 2] [\\a \\b \\c])"
  [coll]
  (let [num-elems (count (first coll))
        inits (for [_ (range num-elems)] [])]
    (reduce (fn [cols elems] (map-indexed
                              (fn [idx coll] (conj coll (elems idx))) cols))
        inits coll)))

I'm interested in whether there is a standard method for this.
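A common idiom (a sketch of the usual answer) is apply with map and vector, since map already walks several sequences in lockstep:

(defn zip
  "Transposes a sequence of sequences."
  [coll]
  (apply map vector coll))

(zip [[:a 0 \a] [:b 1 \b] [:c 2 \c]])
;; => ([:a :b :c] [0 1 2] [\a \b \c])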

by alun at September 17, 2014 02:52 AM

Scala.NotImplementedError: an implementation is missing?

Here is my code:

package example

object Lists {

  def max(xs: List[Int]): Int = {
    if(xs.isEmpty){
        throw new java.util.NoSuchElementException()
    }
    else {
        max(xs.tail)
    }
  }
}

When I run it in the sbt console:

scala> import example.Lists._
scala> max(List(1,3,2))

I have the following error:

Scala.NotImplementedError: an implementation is missing

How can I fix that?

Thanks.
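For what it's worth, scala.NotImplementedError is what the ??? placeholder throws, so the message usually points at a stale compiled class still containing ??? rather than at the code shown; recompiling the current source should clear it. Separately, the max shown never compares elements; a minimal corrected sketch:

def max(xs: List[Int]): Int = xs match {
  case Nil      => throw new java.util.NoSuchElementException()
  case x :: Nil => x                 // a single element is the maximum
  case x :: ys  => x max max(ys)     // compare the head with the max of the tail
}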

by Michael Vayvala at September 17, 2014 02:29 AM

/r/compsci

how do I get interested in algorithm analysis

I'm a seasoned web developer; in my day job I don't have to use much of [read: any of] active algorithm analysis [at least as I see it]. But this field is both intriguing and intimidating. I want to conquer the fear, but every time I start out, the moment any analysis involves advanced mathematical concepts, I lose focus and never get back to it.

I guess, what I'm asking for is how did you overcome those feelings, and how to make learning algorithms and their analysis more fun and practical.

submitted by z1ha9

September 17, 2014 02:22 AM

StackOverflow

IntelliJ IDEA 13: new Scala SBT project hasn't src directory structure generated

I followed the getting-started video on the JetBrains website to set up IntelliJ IDEA 13.1 Community Edition to work with Scala. Scala plugin v0.36.431 had been installed. When I created a new Scala SBT project with the wizard, there was no src/ directory structure generated in the project. Only two sbt files were generated:

scala-course/
├── build.sbt
└── project
    └── plugins.sbt

From the video and other documents I know that there should be a src/ directory structure, including src/main/scala, src/test/scala, etc. sbt uses the same directory structure as Maven for source files by default.

I can create those folders manually and mark them as source roots, but that is tedious. So my question is: why doesn't the IntelliJ IDEA new-project wizard generate the directory structure as described in the documentation? Was I doing something wrong? I checked the preferences and couldn't find anything that seems related.

by aleung at September 17, 2014 01:49 AM

Wes Felter

TheoryOverflow

Do Genetic Algorithms Expect a Independent Search Space

Genetic Algorithms seem like multiple simulated annealing instances, augmented with a crossover genetic operator. The crossover operator selects predefined genes from two different parent solutions to create a new child solution for the next generation.

This seems to suggest that the search space is independent (a single gene will benefit a solution regardless of the other genes). Additionally, it seems like the user needs to choose axes along which the search space is separable.

However, genetic algorithms are often used in practice. Are my concerns a nonissue? Are most real world search spaces independent?

P.S. I would define a two-argument fitness function $F(x,y)$ as independent if

$$F(x,y) = G(x)+H(y)+\epsilon J(x,y)$$

for some small $\epsilon$. For example, $F(x,y) = -x^2-y^2+0.1xy$ would be an independent function. Intuitively, the arguments of an independent function interfere with each other minimally.

by Joshua at September 17, 2014 01:42 AM

arXiv Networking and Internet Architecture

Wireless networks appear Poissonian due to strong shadowing. (arXiv:1409.4739v1 [cs.NI])

Geographic locations of cellular base stations can sometimes be well fitted with spatially homogeneous Poisson point processes. In this paper we make a complementary observation: In the presence of log-normal shadowing of sufficiently high variance, the statistics of the propagation loss of a single user with respect to different network stations are invariant with respect to their geographic positioning, whether regular or not, for a wide class of empirically homogeneous networks. Even in the perfectly hexagonal case they appear as though they were realized in a Poisson network model, i.e., they form an inhomogeneous Poisson point process on the positive half-line with a power-law density characterized by the path-loss exponent. At the same time, the conditional distances to the corresponding base stations become independent and log-normally distributed, which can be seen as a decoupling between the real and the model geometry. The result applies also to the Suzuki (Rayleigh-log-normal) propagation model. We use the Kolmogorov-Smirnov test to empirically study the quality of the Poisson approximation and use it to build a linear-regression method for the statistical estimation of the value of the path-loss exponent.

by Bartlomiej Blaszczyszyn (INRIA Paris-Rocquencourt), Mohamed Kadhem Karray (FT R&D), Holger Paul Keeler (INRIA Paris-Rocquencourt) at September 17, 2014 01:30 AM

Doing-it-All with Bounded Work and Communication. (arXiv:1409.4711v1 [cs.DC])

We consider the Do-All problem, where $p$ cooperating processors need to complete $t$ similar and independent tasks in an adversarial setting. Here we deal with a synchronous message passing system with processors that are subject to crash failures. Efficiency of algorithms in this setting is measured in terms of work complexity (also known as total available processor steps) and communication complexity (total number of point-to-point messages). When work and communication are considered to be comparable resources, then the overall efficiency is meaningfully expressed in terms of effort defined as work + communication. We develop and analyze a constructive algorithm that has work ${\cal O}( t + p \log p\, (\sqrt{p\log p}+\sqrt{t\log t}\, ) )$ and a nonconstructive algorithm that has work ${\cal O}(t +p \log^2 p)$. The latter result is close to the lower bound $\Omega(t + p \log p/ \log \log p)$ on work. The effort of each of these algorithms is proportional to its work when the number of crashes is bounded above by $c\,p$, for some positive constant $c < 1$. We also present a nonconstructive algorithm that has effort ${\cal O}(t + p ^{1.77})$.

by Bogdan S. Chlebus, Leszek Gąsieniec, Dariusz R. Kowalski, Alexander A. Shvartsman at September 17, 2014 01:30 AM

Differentially Private Exponential Random Graphs. (arXiv:1409.4696v1 [stat.OT])

We propose methods to release and analyze synthetic graphs in order to protect privacy of individual relationships captured by the social network. Proposed techniques aim at fitting and estimating a wide class of exponential random graph models (ERGMs) in a differentially private manner, and thus offer rigorous privacy guarantees. More specifically, we use the randomized response mechanism to release networks under $\epsilon$-edge differential privacy. To maintain utility for statistical inference, treating the original graph as missing, we propose a way to use likelihood based inference and Markov chain Monte Carlo (MCMC) techniques to fit ERGMs to the produced synthetic networks. We demonstrate the usefulness of the proposed techniques on a real data example.

by Vishesh Karwa, Aleksandra B. Slavković, Pavel Krivitsky at September 17, 2014 01:30 AM

Position Auctions with Externalities and Brand Effects. (arXiv:1409.4687v1 [cs.GT])

This paper presents models for predicted click-through rates in position auctions that take into account two possibilities that are not normally considered---that the identities of ads shown in other positions may affect the probability that an ad in a particular position receives a click (externalities) and that some ads may be less adversely affected by being shown in a lower position than others (brand effects). We present a general axiomatic methodology for how click probabilities are affected by the qualities of the ads in the other positions, and illustrate that using these axioms will increase revenue as long as higher quality ads tend to be ranked ahead of lower quality ads. We also present appropriate algorithms for selecting the optimal allocation of ads when predicted click-through rates are governed by either the models of externalities or brand effects that we consider. Finally, we analyze the performance of a greedy algorithm of ranking the ads by their expected cost-per-1000-impressions bids when the true click-through rates are governed by our model of predicted click-through rates with brand effects and illustrate that such an algorithm will potentially cost as much as half of the total possible social welfare.

by Patrick Hummel, R. Preston McAfee at September 17, 2014 01:30 AM

Automatic Error Localization for Software using Deductive Verification. (arXiv:1409.4637v1 [cs.LO])

Even competent programmers make mistakes. Automatic verification can detect errors, but leaves the frustrating task of finding the erroneous line of code to the user. This paper presents an automatic approach for identifying potential error locations in software. It is based on a deductive verification engine, which detects errors in functions annotated with pre- and post-conditions. Using an automatic theorem prover, our approach finds expressions in the code that can be modified such that the program satisfies its specification. Scalability is achieved by analyzing each function in isolation. We have implemented our approach in the widely used Frama-C framework and present first experimental results. This is an extended version of [8], featuring an additional appendix.

by Robert Koenighofer, Ronald Toegl, Roderick Bloem at September 17, 2014 01:30 AM

Laboratory Test Bench for Research Network and Cloud Computing. (arXiv:1409.4626v1 [cs.DC])

At the present moment there is great interest in the development of information systems operating in cloud infrastructures. Many tasks remain unresolved, such as the optimization of large databases in a hybrid cloud infrastructure, quality of service (QoS) at different levels of cloud services, dynamic control of the distribution of cloud resources in application systems, and many others. Research and development of new solutions can be limited when using emulators or international commercial cloud services, due to their closed architecture and limited opportunities for experimentation. The article provides answers to questions on setting up a pilot cloud practically "at home", with the ability to adjust the emulated channel width and delays in data transmission. It also describes the architecture and configuration of the experimental setup. The proposed modular structure can be expanded with available computing power.

by Evgeniy Pluzhnik, Evgeny Nikulchev, Simon Payain at September 17, 2014 01:30 AM

A new Watermarking Technique for Medical Image using Hierarchical Encryption. (arXiv:1409.4587v1 [cs.CR])

In recent years, characterized by technological innovation and the digital revolution, the field of media has become important. The transfer, exchange and duplication of multimedia data have become major concerns of researchers. Consequently, protecting copyrights and ensuring service safety is needed. Cryptography has a specific role: to protect secret files against unauthorized access. In this paper, a hierarchical cryptosystem algorithm based on Logistic Map chaotic systems is proposed. The results show that the proposed method improves the security of the image. Experimental results on a database of 200 medical images show that the proposed method gives significantly better results.

by Med Karim Abdmouleh, Ali Khalfallah, Med Salim Bouhlel at September 17, 2014 01:30 AM

Improving files availability for BitTorrent using a diffusion model. (arXiv:1409.4565v1 [cs.NI])

The BitTorrent mechanism effectively spreads file fragments by copying the rarest fragments first. We propose to apply a mathematical model for the diffusion of fragments on a P2P network in order to take into account both the effects of peer distances and the changing availability of peers as time goes on. Moreover, we manage to provide a forecast of the availability of a torrent thanks to a neural network that models the behaviour of peers on the P2P system. The combination of the mathematical model and the neural network provides a solution for choosing file fragments that need to be copied first, in order to ensure their continuous availability, counteracting possible disconnections by some peers.

by Christian Napoli, Giuseppe Pappalardo, Emiliano Tramontana at September 17, 2014 01:30 AM

Pricing Mobile Data Offloading: A Distributed Market Framework. (arXiv:1409.4560v1 [cs.NI])

Mobile data offloading is an emerging technology to avoid congestion in cellular networks and improve the level of user satisfaction. In this paper, we develop a distributed market framework to price the offloading service, and conduct a detailed analysis of the incentives for offloading service providers and conflicts arising from the interactions of different participators. Specifically, we formulate a multi-leader multi-follower Stackelberg game (MLMF-SG) to model the interactions between the offloading service providers and the offloading service consumers in the considered market framework, and investigate the cases where the offloading capacity of APs is unlimited and limited, respectively. For the case without capacity limit, we decompose the followers' game of the MLMF-SG (FG-MLMF-SG) into a number of simple follower games (FGs), and prove the existence and uniqueness of the equilibrium of the FGs from which the existence and uniqueness of the FG-MLMF-SG also follows. For the leaders' game of the MLMF-SG, we also prove the existence and uniqueness of the equilibrium. For the case with capacity limit, by considering a symmetric strategy profile, we establish the existence and uniqueness of the equilibrium of the corresponding MLMF-SG, and present a distributed algorithm that allows the leaders to achieve the equilibrium. Finally, extensive numerical experiments demonstrate that the Stackelberg equilibrium is very close to the corresponding social optimum for both considered cases.

by Kehao Wang, Francis C. M. Lau, Lin Chen, Robert Schober at September 17, 2014 01:30 AM

The Q-curve construction for endomorphism-accelerated elliptic curves. (arXiv:1409.4526v1 [cs.CR])

We give a detailed account of the use of \(\mathbb{Q}\)-curve reductions to construct elliptic curves over \(\mathbb{F}_{p^2}\) with efficiently computable endomorphisms, which can be used to accelerate elliptic curve-based cryptosystems in the same way as Gallant--Lambert--Vanstone (GLV) and Galbraith--Lin--Scott (GLS) endomorphisms. Like GLS (which is a degenerate case of our construction), we offer the advantage over GLV of selecting from a much wider range of curves, and thus finding secure group orders when \(p\) is fixed for efficient implementation. Unlike GLS, we also offer the possibility of constructing twist-secure curves. We construct several one-parameter families of elliptic curves over \(\mathbb{F}_{p^2}\) equipped with efficient endomorphisms for every \(p > 3\), and exhibit examples of twist-secure curves over \(\mathbb{F}_{p^2}\) for the efficient Mersenne prime \(p = 2^{127}-1\).

by Benjamin Smith (INRIA Saclay - Ile de France, LIX) at September 17, 2014 01:30 AM

Scalable and Efficient Self-Join Processing technique in RDF data. (arXiv:1409.4507v1 [cs.DB])

Efficient management of RDF data plays an important role in successfully understanding and quickly querying data. Although current approaches to indexing RDF triples, such as property tables and vertical partitioning, have solved many issues, they still suffer performance problems with complex self-join queries and with inserting data into the same table. As an improvement, in this paper we propose an alternative solution to facilitate flexibility and efficiency in such queries and try to reach the optimal solution by decreasing the self-joins as much as possible. This solution is based on the idea of "Recursive Mapping of Twin Tables" (RMTT). The main goal of the RMTT approach is to divide the main RDF triple table into two tables which have the same structure as the RDF triple and to insert the RDF data recursively. Our experiments compare the performance of join queries in the vertically partitioned approach and the RMTT approach using very large RDF data, like the DBLP and DBpedia datasets. Our experimental results with a number of complex submitted queries show that our approach is highly scalable compared with the RDF-3X approach, and that RMTT reduces the number of self-joins, especially in complex queries, 3-4 times relative to RDF-3X.

by Awny Sayed, Amal Almaqrashi at September 17, 2014 01:30 AM

Audit Games with Multiple Defender Resources. (arXiv:1409.4503v1 [cs.GT])

Modern organizations (e.g., hospitals, social networks, government agencies) rely heavily on audit to detect and punish insiders who inappropriately access and disclose confidential information. Recent work on audit games models the strategic interaction between an auditor with a single audit resource and auditees as a Stackelberg game, augmenting associated well-studied security games with a configurable punishment parameter. We significantly generalize this audit game model to account for multiple audit resources where each resource is restricted to audit a subset of all potential violations, thus enabling application to practical auditing scenarios. We provide an FPTAS that computes an approximately optimal solution to the resulting non-convex optimization problem. The main technical novelty is in the design and correctness proof of an optimization transformation that enables the construction of this FPTAS. In addition, we experimentally demonstrate that this transformation significantly speeds up computation of solutions for a class of audit games and security games.

by Jeremiah Blocki, Nicolas Christin, Anupam Datta, Ariel Procaccia, Arunesh Sinha at September 17, 2014 01:30 AM

Toward Fully-Shared Access: Designing ISP Service Plans Leveraging Excess Bandwidth Allocation. (arXiv:1409.4499v1 [cs.NI])

Shaping subscriber traffic based on the token bucket filter (TBF) by Internet service providers (ISPs) results in a waste of network resources in shared access when there are few active subscribers, because it cannot allocate excess bandwidth in the long term. New traffic control schemes have recently been proposed to allocate excess bandwidth among active subscribers in proportion to their token generation rates. In this paper we report the current status of our research on designing flexible yet practical service plans exploiting the excess bandwidth allocation enabled by the new traffic control schemes in shared access networks, which are attractive to both the ISP and its subscribers in terms of revenue and quality of service (QoS) and serve as a stepping stone to fully-shared access in the future.

by Kyeong Soo Kim at September 17, 2014 01:30 AM

Adaptive Content Control for Communication amongst Cooperative Automated Vehicles. (arXiv:1409.4470v1 [cs.NI])

Cooperative automated vehicles exchange information to assist each other in creating a more precise and extended view of their surroundings, with the aim of improving automated-driving decisions. This paper addresses the need for scalable communication among these vehicles. To this end, a general communication framework is proposed through which automated cars exchange information derived from multi-resolution maps created using their local sensing modalities. This method can extend the region visible to a car beyond the area directly sensed by its own sensors. An adaptive, probabilistic, distance-dependent strategy is proposed that controls the content of the messages exchanged among vehicles based on performance measures associated with the load on the communication channel.

by Mohammad Fanaei, Amin Tahmasbi-Sarvestani, Yaser P. Fallah, Gaurav Bansal, Matthew C. Valenti, John B. Kenney at September 17, 2014 01:30 AM

Faster Existential FO Model Checking on Posets. (arXiv:1409.4433v1 [cs.LO])

We prove that the model checking problem for the existential fragment of first order (FO) logic on partially ordered sets is fixed-parameter tractable (FPT) with respect to the formula and the width of a poset (the maximum size of an antichain). While there is a long line of research into FO model checking on graphs, the study of this problem on posets has been initiated just recently by Bova, Ganian and Szeider (LICS 2014), who proved that the existential fragment of FO has an FPT algorithm for a poset of fixed width. We improve upon their result in two ways: (1) the runtime of our algorithm is $O(f(|{\phi}|,w)\cdot n^2)$ on n-element posets of width $w$, compared to $O(g(|{\phi}|)\cdot n^{h(w)})$ of Bova et al., and (2) our proofs are simpler and easier to follow. We complement this result by showing that, under a certain complexity-theoretical assumption, the existential FO model checking problem does not have a polynomial kernel.

by Jakub Gajarský, Petr Hliněný, Jan Obdržálek, Sebastian Ordyniak at September 17, 2014 01:30 AM

Approximately Optimal Mechanisms for Strategyproof Facility Location: Minimizing $L_p$ Norm of Costs. (arXiv:1305.2446v2 [cs.GT] UPDATED)

We consider the problem of locating a single facility on the real line. This facility serves a set of agents, each of whom is located on the line, and incurs a cost equal to his distance from the facility. An agent's location is private information that is known only to him. Agents report their location to a central planner who decides where to locate the facility. The planner's objective is to minimize a "social" cost function that depends on the agent-costs. However, agents might not report truthfully; to address this issue, the planner must restrict himself to {\em strategyproof} mechanisms, in which truthful reporting is a dominant strategy for each agent. A mechanism that simply chooses the optimal solution is generally not strategyproof, and so the planner aspires to use a mechanism that effectively {\em approximates} his objective function. In our paper, we study the problem described above with the social cost function being the $L_p$ norm of the vector of agent-costs. We show that the median mechanism (which is known to be strategyproof) provides a $2^{1-\frac{1}{p}}$ approximation ratio, and that is the optimal approximation ratio among all deterministic strategyproof mechanisms. For randomized mechanisms, we present two results. First, we present a negative result: we show that for integer $\infty>p>2$, no mechanism---from a rather large class of randomized mechanisms--- has an approximation ratio better than that of the median mechanism. This is in contrast to the case of $p=2$ and $p=\infty$ where a randomized mechanism provably helps improve the worst case approximation ratio. Second, for the case of 2 agents, we show that a mechanism called LRM, first designed by Procaccia and Tennenholtz for the special case of $L_{\infty}$, provides the optimal approximation ratio among all randomized mechanisms.

by Itai Feigenbaum, Jay Sethuraman, Chun Ye at September 17, 2014 01:30 AM

A Simple Algorithm for Global Value Numbering. (arXiv:1303.1880v2 [cs.PL] UPDATED)

Global Value Numbering (GVN) is a method for detecting redundant computations in programs. Here, we introduce the problem of Global Value Numbering in its original form, as conceived by Kildall (1973), and present an algorithm which is a simpler variant of Kildall's. The algorithm uses the concept of a value expression - an abstraction of a set of expressions - enabling a representation of the equivalence information which is compact and simple to manipulate.

by Nabizath Saleena, Vineeth Paleri at September 17, 2014 01:30 AM

StackOverflow

Java passing custom objects to Clojure

I have a .java file whose public String solve() method will be called to answer a problem. The method receives the project-defined Java class RP, which contains a collection of RF, which each contain a collection of RO, which each contain several RA which, finally, boil down to String, String pairs (a name and a value). My question is: (how) can I have that solve() method pass its RP object to Clojure, where I believe I can do all the work to generate a solution more effectively, and eventually return a String solution back?

EDIT: What I'm looking for is some way of saying, String answer = toClojure(RP); and in Clojure I'll be able to do the equivalent of RP.getRF().getRO().getRA().getName(), where each of these functions is defined in the Java classes.
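A hedged sketch of one way to wire this up: pass the RP object straight through and use ordinary Java interop on the Clojure side; the namespace name my.solver is made up for illustration. On the Java side, clojure.java.api.Clojure (available since Clojure 1.6) can look the function up and invoke it, e.g. Clojure.var("my.solver", "solve").invoke(rp) after requiring the namespace.

(ns my.solver)

;; Plain interop: rp is the RP instance handed over from Java. The .. macro
;; chains the getter calls, roughly RP.getRF().getRO().getRA().getName().
(defn solve [rp]
  (.. rp getRF getRO getRA getName))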

by WorldsEndless at September 17, 2014 01:22 AM

What is the issue with this sbt file?

When I import an SBT project into IntelliJ, the build.sbt file shows a lot of errors, as in the following screenshot. I'm wondering what the issue might be.

IDEA Version 13.1.4

I also see the following

The following source roots are outside of the corresponding base directories:
C:\Users\p0c\Downloads\recfun\src\main\resources
C:\Users\p0c\Downloads\recfun\src\test\java
C:\Users\p0c\Downloads\recfun\src\test\scala
C:\Users\p0c\Downloads\recfun\src\test\resources
These source roots cannot be included in the IDEA project model. Please consider using shared SBT projects instead of shared source roots.


by Pangea at September 17, 2014 01:05 AM

How can I get the behavior of GNU's readlink -f on a Mac?

On Linux, the readlink utility accepts an option -f that follows additional links. This doesn't seem to work on Mac and possibly BSD based systems. What would the equivalent be?

Here's some debug information:

$ which readlink; readlink -f
/usr/bin/readlink
readlink: illegal option -f
usage: readlink [-n] [file ...]
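One portable workaround (a sketch; note it resolves the entire path, not only the final component) is a Python one-liner, since Python ships with OS X:

python -c 'import os, sys; print(os.path.realpath(sys.argv[1]))' /some/sym/link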

by troelskn at September 17, 2014 01:01 AM

Halfbakery

StackOverflow

How can I access values outside of Spark GraphX .map loop?

Brand new to Apache Spark and I'm a little confused about how to make updates to a value that sits outside of a .mapTriplets iteration in GraphX. See below:

def mapTripletsMethod(edgeWeights: Graph[Int, Double], stationaryDistribution: Graph[Double, Double]) = {
  val tempMatrix: SparseDoubleMatrix2D = graphToSparseMatrix(edgeWeights)

  stationaryDistribution.mapTriplets{ e =>
      val row = e.srcId.toInt
      val column = e.dstId.toInt
      var cellValue = -1 * tempMatrix.get(row, column) + e.dstAttr
      tempMatrix.set(row, column, cellValue) // this doesn't do anything to tempMatrix
      e
    }
}

I'm guessing this is due to the design of an RDD and there's no simple way to update the tempMatrix value. When I run the above code the tempMatrix.set method does nothing. It was rather difficult to try to follow the problem in the debugger.

Does anyone have an easy solution? Thank you!

Edit

I've made an update above to show that stationaryDistribution is a graph RDD.
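A hedged explanation: the closure handed to mapTriplets is serialized out to the executors, so each task mutates its own copy of tempMatrix and the driver's copy is never touched. Under that assumption, one sketch of a workaround is to compute the updates as data and apply them on the driver:

// Sketch: derive the updates as plain data on the cluster, then apply
// them to the driver-local matrix after collecting them back.
val updates = stationaryDistribution.triplets
  .map(e => (e.srcId.toInt, e.dstId.toInt, e.dstAttr))
  .collect()

updates.foreach { case (row, column, dstAttr) =>
  tempMatrix.set(row, column, -1 * tempMatrix.get(row, column) + dstAttr)
}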

by crockpotveggies at September 17, 2014 12:48 AM

Planet Theory

Vertex Guarding in Weak Visibility Polygons

Authors: Pritam Bhattacharya, Subir Kumar Ghosh, Bodhayan Roy
Download: PDF
Abstract: The art gallery problem enquires about the least number of guards that are sufficient to ensure that an art gallery, represented by a polygon $P$, is fully guarded. In 1998, the problems of finding the minimum number of point guards, vertex guards, and edge guards required to guard $P$ were shown to be APX-hard by Eidenbenz, Widmayer and Stamm. In 1987, Ghosh presented approximation algorithms for vertex guards and edge guards that achieved a ratio of $\mathcal{O}(\log n)$, which was improved up to $\mathcal{O}(\log\log OPT)$ by King and Kirkpatrick in 2011. It has been conjectured that constant-factor approximation algorithms exist for these problems. We settle the conjecture for the special class of polygons that are weakly visible from an edge and contain no holes by presenting a 6-approximation algorithm for finding the minimum number of vertex guards that runs in $\mathcal{O}(n^2)$ time. On the other hand, when holes are allowed, we show that the problem of vertex guarding a weak visibility polygon is APX-hard by constructing a reduction from the Set Cover problem.

September 17, 2014 12:41 AM

Fair and Square: Cake-cutting in Two Dimensions

Authors: Erel Segal-Halevi, Avinatan Hassidim, Yonatan Aumann
Download: PDF
Abstract: We consider the problem of fairly dividing a two dimensional heterogeneous good among multiple players. Applications include division of land as well as ad space in print and electronic media. Classical cake cutting protocols primarily consider a one-dimensional resource, or allocate each player multiple infinitesimally small "pieces". In practice, however, the two dimensional \emph{shape} of the allotted piece is of crucial importance in many applications (e.g. squares or bounded aspect-ratio rectangles are most useful for building houses, as well as advertisements). We thus introduce and study the problem of fair two-dimensional division wherein the allotted plots must be of some restricted two-dimensional geometric shape(s). Adding this geometric constraint re-opens most questions and challenges related to cake-cutting. Indeed, even the elementary \emph{proportionality} fairness criteria can no longer be guaranteed in all cases. In this paper we thus examine the \emph{level} of proportionality that \emph{can} be guaranteed, providing both impossibility results (for proportionality that cannot be guaranteed), and algorithmic constructions (for proportionality that can be guaranteed). We focus primarily on the case when the cake is a rectilinear polygon and the allotted plots must be squares or bounded aspect-ratio rectangles.

September 17, 2014 12:41 AM

Similarity of closed polygonal curves in Frechet metric

Authors: M. I. Schlesinger, E. V. Vodolazskiy, V. M. Yakovenko
Download: PDF
Abstract: The article analyzes similarity of closed polygonal curves in Frechet metric, which is stronger than the well-known Hausdorff metric and therefore is more appropriate in some applications. An algorithm that determines whether the Frechet distance between two closed polygonal curves with m and n vertices is less than a given number is described. The described algorithm takes O(mn) time whereas the previously known algorithms take O(mn log(mn)) time.

September 17, 2014 12:40 AM

A Note on Rectangle Covering with Congruent Disks

Authors: Emanuele Tron
Download: PDF
Abstract: In this note we prove that, if $S_n$ is the greatest area of a rectangle which can be covered with $n$ unit disks, then $2\leq S_n/n<3 \sqrt{3}/2$, and these are the best constants; moreover, for $\Delta(n):=(3\sqrt{3}/2)n-S_n$, we have $0.727384<\liminf\Delta(n)/\sqrt{n}<2.121321$ and $0.727384<\limsup\Delta(n)/\sqrt{n}<4.165064$.

September 17, 2014 12:40 AM

Projective clone homomorphisms

Authors: Manuel Bodirsky, Michael Pinsker, András Pongrácz
Download: PDF
Abstract: It is known that a countable $\omega$-categorical structure interprets all finite structures primitively positively if and only if its polymorphism clone maps to the clone of projections on a two-element set via a continuous clone homomorphism. We investigate the relationship between the existence of a clone homomorphism to the projection clone, and the existence of such a homomorphism which is continuous and thus meets the above criterion.

September 17, 2014 12:40 AM

StackOverflow

Om not reflecting changes even after swap! app-state

Using Light Table, how do I tell Om to re-render the DOM after eval'ing a modified Om function?

Forcing a swap! on the main state atom has no effect: (swap! app-state identity)

Cycling routes explicitly with (swap! app-state assoc :current-page :about) and back to home with (swap! app-state assoc :current-page :home) does reflect changes to the home page.

My browser is connected to Light Table and I can trigger alerts with, e.g. (js/alert "hi")

Calling the root again also triggers a render:

(root app app-state
      {:target (. js/document
                  (getElementById "site"))})

Why doesn't Om trigger a render on app-state atom swap!?
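A hedged guess at the cause: Om re-renders off a watch on the root atom and skips components whose new value is identical to the old one, which is exactly what (swap! app-state identity) produces. A throwaway sketch that forces a genuinely new value (:__render-nonce is a made-up key):

;; Bump a dummy key so the swapped-in value differs from the previous one
;; and the render pass actually sees a change.
(swap! app-state update-in [:__render-nonce] (fnil inc 0))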

by pate at September 17, 2014 12:16 AM

How to exit a program properly when using Scalaz Futures and the timed function

This works as expected:

object Planexecutor extends App {    
  import scalaz.concurrent.Future
  import scala.concurrent.duration._

  val f = Future.apply(longComputation)
  val result = f.run
  println(result)
}

This does not:

object Planexecutor extends App {    
  import scalaz.concurrent.Future
  import scala.concurrent.duration._

  val f = Future.apply(longComputation).timed(1.second)
  val result = f.run
  println(result)
}

In the first case, the application exits normally whereas in the second case it does not. However, both versions properly print out the result value.

Is this a bug or is there something I am not understanding?
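A hedged reading (not verified against the scalaz source): the hang suggests that the scheduler timed uses to enforce the timeout runs non-daemon threads, which keep the JVM alive after main finishes. If timed accepts the ScheduledExecutorService implicitly, as in scalaz 7.1, a daemon-threaded one in scope is a sketch of a fix:

import java.util.concurrent.{Executors, ScheduledExecutorService, ThreadFactory}

// Sketch: threads marked as daemons cannot keep the JVM alive on their own.
implicit val daemonScheduler: ScheduledExecutorService =
  Executors.newScheduledThreadPool(1, new ThreadFactory {
    def newThread(r: Runnable): Thread = {
      val t = new Thread(r)
      t.setDaemon(true)
      t
    }
  })

val f = Future.apply(longComputation).timed(1.second)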

by UndercoverAgent at September 17, 2014 12:05 AM

Does this function make use of haskell's lazy evaluation

I wrote the following function to decide if a number is prime or not.

isPrime :: Int -> Bool
isPrime n = and (map (\x -> (n `mod` x > 0))[2..(intSquareRoot n)])

intSquareRoot :: Int -> Int
intSquareRoot n = intSq n
  where
    intSq x
      | x*x > n = intSq (x - 1)
      | otherwise = x

I just got started with using Haskell so this piece of code may be hideous to anyone who is trained in using it. However, I am curious whether this piece of code makes use of Haskell's lazy evaluation. This part

(map (\x -> (n `mod` x > 0))[2..(intSquareRoot n)])

will create a list of booleans; if just one of these is False (so if a number between 2 and the sqrt of n divides n) then the whole thing is False via the 'and' function. But I think the whole list will be created first and then the 'and' function will be applied. Is this true? If so, how can I make this faster by using lazy evaluation, so that the function stops and returns False once it finds the first divisor of n? Thanks in advance for any help!
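For what it's worth, and in GHC is lazy: it stops at the first False, and since the list is produced on demand, no elements past the first divisor are ever computed. An equivalent spelling that makes the short-circuiting explicit (a sketch):

isPrime :: Int -> Bool
isPrime n = all (\x -> n `mod` x > 0) [2 .. intSquareRoot n]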

by Slugger at September 17, 2014 12:00 AM

Planet Clojure

Clojure as a competitive advantage, and other reasons to use Clojure

In the sixth Flowa podcast episode the topic is Clojure, a LISP variant running on top of the Java virtual machine and a member of the nobility of programming languages. Among other things, the podcast reveals why Clojure is a competitive advantage for the companies that use it.

The guests this time are independent software developer and Clojure Cup chief organizer Tero Parviainen (@teropa) and Flowa's Clojure guru Tero Kadenius (@pisketti). Interviewing the Teros is functional programming enthusiast Ari-Pekka Lappi (@ilmirajat), who is still only at the Clojure fanboy stage.

In this episode the anti-agile curmudgeon familiar from earlier episodes is stood in for by Olli Olioguru, who suspects that functional programming is far too difficult for junior coders and bemoans how even the Java code coming from low-cost coding countries is rubbish, and so doesn't dare even think about how cryptic offshore Clojure code would be. Podcast host Ari-Pekka also challenges his guests with the unfortunate fact that the Java virtual machine has largely been developed on Java's terms alone, and the other languages running on top of the JVM suffer for it.

Besides his Clojure guru work, Tero Parviainen is well aware of what is going on in frontend development. He is writing a book titled "Build Your Own AngularJS", and his book "Real-time Web Application Development using Vert.x 2.0" has been published. Before Clojure, Tero Parviainen's background was in Java and Ruby. In the podcast Tero Parviainen reveals that as recently as a year ago he would not have seriously considered Clojure on the frontend, but the situation has changed, since a great deal has happened on the Clojure front, particularly in frontend development. These days the use cases where Parviainen would not use Clojure are fewer and fewer. More about Tero can be found at http://teropa.info.

Tero Kadenius's background is most strongly in Java programming. He first got acquainted with Clojure about four years ago. Besides spare-time Clojure projects, he has a year-long Clojure work project behind him. Tero, too, is writing a book, but its topic is Agile and Lean software development. As things stand, the book will be published before the end of this year.

What, which, why?

00:47
Tero Parviainen, could you briefly tell us what Clojure Cup is?

02:15
What is Clojure used for, and what should it be used for? Can you mention any reference products or services that have been built with Clojure?
09:47
The Java virtual machine has largely been developed on Java's terms alone, and the other languages running on top of the JVM suffer for it. What do you think of this claim?

Scenarios and challenges

11:43
Scenario 1: Junior coders won't learn functional programming, so it's better to code in Java, C#, or some other object-oriented language.
15:37
Scenario 2: We have outsourced most of our software development. I won't name the country; I'll only say that the Java code coming from there is at best functional and at worst buggy garbage. If Clojure code came from there, it would on top of that be cryptic beyond belief. Java, C# and other proper object-oriented languages force you to write clearer code.
18:32
Scenario 3: Well-written object-oriented code is much clearer and easier to extend than a functional jumble of hieroglyphs. Code must be maintainable, and therefore Clojure should not be used.
25:09
Scenario 4: We simply can't afford to learn yet another language.

In practice

32:44
How about tools and platforms? What does Clojure's technical toolbox look like?
41:34
What about databases?
48:11
What does Clojure's future look like?

by Flowa at September 17, 2014 12:00 AM

Planet Clojure

Programming without objects

17.09.2014

Recently I gave an itemis internal talk about basic functional programming (FP) concepts. Towards the end I made a claim that objects and their blueprints (a.k.a. classes) have severe downsides and that it is much better to "leave data alone", which is exactly how Clojure and Haskell treat this subject.

I fully acknowledge that such a statement is disturbing to hear, especially if your professional thinking was shaped by the OOP paradigm1 for 20 years or more. I know this well, because that was exactly my situation two years ago.

So let's take a closer look at the OO class as provided by Java, Scala or C# and think a little bit about its features, strengths and weaknesses. Subsequently, I'll explain how to do better without classes and objects.

The OO class

An OO class bundles data and the implementation related to this data. A good OO class hides any details about implementation and data that are not supposed to be part of the API. According to widespread belief, the good OO class reduces complexity because any client using instances of it (a.k.a. the objects) will not and cannot deal with those implementation details.

So, we can see three striking aspects here: API as formed by the public members, data declaration and method implementation. But there is much more to it:

  • Instances of a class have identity by default. Two objects are the same if and only if both share the same address in memory which gives us a relation of intensional equality on their data.
  • The reference to an object is passed as implicit first argument to any method invocation on the object, available through this or self.
  • A class is a type.
  • Inheritance between classes enables subtyping and allows subclasses to reuse method implementations and data declarations of their superclasses.
  • Methods can be "polymorphic", which only means that invoking a method dispatches over the type of the implicit first argument.
  • Classes can have type variables to generalize their functionality over a range of other types.
  • Classes act as namespaces for inner classes and static members.
  • Classes can support already existing contracts of abstract types (in Java called "interfaces") by implementing those.

That's a lot of stuff, and I probably did not list everything that classes provide. The first thing we can state is that a class is not "simple" in the sense that it does only one thing. In fact, it does a lot of things. But so what? After all, the concept is so old and so widely known that we may accept it anyway.

Besides the class being a complex thing there are other disadvantages that most OO programmers are somehow aware of but tend to accept like laws of nature2.

Next I'll discuss a few of these weaknesses in detail:

Equality

Let's look at intensional equality as provided by object identity. Apparently this is not what we want at all times, otherwise we wouldn't see this pattern called ValueObject 3. A type like Java's String is a well-known example of a successful ValueObject implementation. The price for this freedom of choice is a rule that the == operator is almost useless and we must use equals, because only its implementation can decide what kind of equality relation is in effect. The numerous tutorials and blog posts on how to correctly implement equals and hashCode bear witness that this topic is not easy to get right. In addition we must be very careful when using a potentially mutable object as a key in a map or item in a set. If it's not a value a lookup might fail miserably (and unexpectedly).

Concurrency

The conflation of mutable data and object identity is also a problem in another regard: concurrency. Those objects not acting as values must be protected from race conditions and stale memory, either by thread confinement or by synchronization which is hard to do right in order to avoid dead locks or live locks.

This

Let's turn to the implicit first argument, in Java and C# it's referred to by the keyword this. It allows us to invoke methods on an object Sam by stating Sam.doThis() or Sam.doThat() which sounds similar to natural language. So, we can consider this as syntactic sugar to ease understanding in cases where a "subject verb" formulation makes sense4. But wait, what happens if we're not able to use a method implemented in a class? More recent languages offer extension methods (in Scala realizable using implicit) to help make our notation more consistent. Thus, in order to put the first argument in front of the method name to improve readability we're required to master an additional concept.

Dispatch

Objects support polymorphism, which was the really exotic feature back then when I learned OOP. Actually, the OO "polymorphic method invocation" is a special case of a function invocation dispatch, namely a dispatch over the type of the implicit first argument. So this fancy p-thing turns out to be a somewhat arbitrary limitation of a very powerful concept.

Inheritance

Implementation inheritance creates a "is-a" relation between types (giving birth to terms like "superclass" and "subclass"), shares API, and hard-wires this with reuse of data declarations and method implementations. But what can you do if you want one without the other? Well, if you don't want to reuse but only define type relations and share API you can employ interfaces, which is simple and effective. Now the other way around: what can you do to reuse implementations without tying a class by extends to an existing class? The options I know are:

  • Use static methods that expect a type declared by an interface (which might force some data members to become publicly accessible). Here again, extension methods might jump in to reduce syntactic disturbances.
  • Use delegation, so the class in need for existing implementations points to a class containing the shared methods. Each delegating method invokes the implementation from the other class, which creates much boilerplate if the number of methods is high.

Implementation inheritance is a very good example for adding much conceptual complexity without giving us the full power of the ideas it builds upon. In fact, the debate about the potential harm that it brings (and how other approaches might remedy this) is nearly as old as the idea itself.

Encapsulation of data

"Information hiding" makes objects opaque, that's it's purpose. It seems useful if objects contain private mutable data necessary to implement stateful behaviour. However, with respect to concurrency and more algebraic data transformation this encapsulation hurts. It hurts because we don't know what's underneath an objects API and the last thing we want to have is mutable state that we're not aware of. It hurts that if we want to work with it's data we have to ask for every single bit of it.

The common solution to the latter problem is to use reflection, and that's exactly what all advanced frameworks do to uniformly implement their behaviour on top of user-defined classes that contain data.

On the other hand, we have all learned that reflection is dangerous: it circumvents the type system, which may lead to all kinds of errors showing up only at runtime. So most programmers resort to working with objects in a very tedious way, creating many times more code than they would have written in a different language.

Judging the OO class

So far, we have seen that OO classes are not ideal. They mix up a lot of features while often providing only a narrow subset of what a more general concept is able to do for us. The resulting programs are not only hard to get right; in terms of lines of code they tend to require several times as much as programs written in modern FP languages. Since system size is a significant cost driver in maintenance, OOP seems to me economically a mistake when you have other techniques at your disposal.

When it was introduced more than 20 years ago the OO class brought encapsulation and polymorphism into the game which was beneficial. Thus, OOP was clearly better than procedural programming. But a lot of time has passed since then and we learned much about all the little and not-so-little flaws. It's time to move on...

A better way

Now, that I have discredited the OO class as offered to you by Java, C# or other mainstream OOP languages I feel the duty to show you an alternative as it is available in Clojure and Haskell. It's based on the following fundamentals:

Data is immutable, and any complex data structure (regardless of whether it is more like a map or a list) behaves like a value: you can't change it. But you can get new values that share a lot of data with what previously existed. The key to making this approach feasible is so-called "persistent data structures". They combine immutability with efficiency.

Data is commonly structured using very few types of collections: in Clojure these are vectors, maps and sets, in Haskell you have lists, maps, sets and tuples. To combine a type with a map-like behaviour Clojure gives us the record. Haskell offers record syntax which is preferable over tuples to model more complex domain data. Resulting "instances" of records are immutable and their fields are public.

Functions are not part of records, although the Haskell compiler as well as the Clojure defrecord macro derive some related default implementations for formatting, parsing, ordering and comparing record values. In both languages we can nicely access pieces of these values and cheaply derive new values from existing ones. No one needs to implement constructors, getters, setters, toString, equals or hashCode.

Defining data structures

Here's an example of some records defined in Haskell:

module Accounting where
data Address = Address { street :: String
                       , city :: String
                       , zipcode :: String} 
             deriving (Show, Read, Eq, Ord)

data Person = Person { name :: String
                     , address :: Address}
            deriving (Show, Read, Eq, Ord)

data Account = Account { owner :: Person
                       , entries :: [Int]}
             deriving (Show, Read, Eq, Ord)

And here's what it looks like in (untyped) Clojure5:

(ns accounting)
(defrecord Address [street zipcode city])

(defrecord Person [name address])

(defrecord Account [owner entries])      

Let's see what we now have got:

  • Thread-safe data.
  • Extensional equality.
  • Types.
  • No subtyping.
  • Reasonable default implementations of basic functionality.
  • Complex data based on very few composable collection types.

The last item can hardly be overrated: since we use only a few common collection types we can immediately apply a vast set of data transformation functions like map, filter, reduce/foldl and friends. It needs some practice, but it'll take you to a whole new level of expressing data transformation logic.
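A tiny sketch of what that buys (made-up data, nothing domain-specific yet): the same handful of functions composes over any of the few collection types:

;; count each inner vector, keep the odd counts, and sum them
(->> [[1 2 3] [4 5] [6]]
     (map count)    ;=> (3 2 1)
     (filter odd?)  ;=> (3 1)
     (reduce +))    ;=> 4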

But where should we implement individual domain logic?

Adding domain functionality

If we only want to implement functions that act on some type of data we can pass a value as (for example) first argument to those. To denote the relation between the record and the functions we implement these in the same Clojure namespace or Haskell module that the record definition lives in.

Let's add a Haskell function that calculates the balance for an account (please note that the Account entries are a list of integers):

balance :: Account -> Int
balance (Account _ es) = foldl (+) 0 es

Now the Clojure equivalent:

(defn balance
  [{es :entries}]
  (reduce + es))

If we now want to call these functions in an OO like "subject verb" manner we would take Clojure's thread-first macro ->. To get the balance of an account a we can use the expression (-> a balance). Or, given that p is a Person, the expression (-> p :address :city) allows us to retrieve the value of the city field.

In Haskell we need an additional function definition to introduce an operator that allows us to flip the order:

(-:) :: a -> (a -> b) -> b
x -: f = f x

Now we can calculate the balance for an account a using a-:balance, or access the city field with an expression p-:address-:city.

Please note, that the mechanisms allowing us to flip the function-argument-order are totally unrelated to records or any other kind of data structure.

Ok, that was easy. For implementing domain logic we now have

  • Ordinary functions taking the record as parameter.
  • A loose "belongs-to" relation between data and functions via namespace/module organization.
  • Independent "syntactic sugar" regarding how we can denote function application.
  • No hassle when we want to extend functionality related to the data without being able to alter the original namespace/module.

From an OO perspective this looks like static functions on data, something described by the term Anemic Domain Model. I can almost hear how it starts to scream inside any long-term OO practitioner's head: "But this is wrong! You must combine data and functions, you must encapsulate this in an entity class!"

After doing this for more than 20 years, I just ask "Why? What's the benefit? And how is this idea actually applied in thousands of systems? Does it serve us well?" My answer today is a clear "No, it doesn't". Actually, keeping these things separate is simpler and makes the pieces more composable. Admittedly, to make it significantly better than procedural programming we have to add a few language features you have been missing in OO for so long. To see this, read on; now the fun begins:

Polymorphism

Both languages allow bundling of function signatures to connect an abstraction with an API. Clojure calls these "protocols", Haskell calls these "type classes"; both share noticeable similarities with Java interfaces. But in contrast to Java, we can declare that types participate in these abstractions, regardless of whether the protocol/type class existed before the record/type declaration or not.

To illustrate this with our neat example, we introduce an abstraction that promises to somehow give us the total sum of some numeric data. Let's start with the Clojure protocol:

(defprotocol Sum
  (totals [x]))

The corresponding type class in Haskell looks like that:

class Sum a where
  totals :: a -> Int

This lets us introduce polymorphic functions for types without touching the types, essentially solving the expression problem, because it lets us apply existing functions that rely only on protocols/type classes to new types.

In Clojure we can extend the protocol to an existing type like so:

(extend-protocol Sum
  Account
  (totals [acc] (balance acc)))

And in Haskell we make an instance:

instance Sum Account where
  totals = balance

It should be easy to spot the similarity. In both cases we implement the missing functionality based on the concrete, existing type. We're now able to apply totals to any Account instance, because Account now supports the contract. What would you do for a similar effect in OO land?6
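
A usage sketch, assuming Account is the Clojure record defined earlier with an :entries field holding integers:

(def acc (map->Account {:entries [100 -20 30]})) ; map->Account is generated by defrecord

(totals acc)
;=> 110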

With this, the story of function implementation reuse becomes very different: protocols/type classes are a means to hide differences between concrete types. This allows us to create a large number of functions that rely solely on abstractions, not concrete types. Instead of each object carrying the methods of its class (and superclasses) around, we have independent functions over small abstractions. Each type of record can decide what it participates in, no matter what existed first. This drastically reduces the overall number of function implementations and thus the size of the resulting system.

Perhaps you now have a first idea of the power of the approach to polymorphism in Clojure and Haskell. But wait: polymorphism is still only a special case of dispatching function invocations.

Other ways for dispatching

Let's first look from a conceptual perspective at what dispatching is all about: you apply a function to some arguments, and some mechanism decides which implementation is actually used. The easiest way to build a runtime dispatcher in an imperative language is a switch statement. What's the problem with this approach? It's tedious to write, and it conflates branching logic with the actual implementation of the case-specific logic.

To come up with a better solution, Haskell and Clojure take very different approaches, but both go beyond what any OO programmer is commonly used to.

Haskell applies "Pattern Matching" to argument values, which essentially combines deconstruction of data structures, binding of symbols to values, and -- here comes the dispatch -- branching according to matched patterns in the data structure. It's already powerful and it can be extended by using "guards" which represent additional conditions. I won't go into the details here, but if you're interested I recommend this chapter from "Learn You a Haskell for Great Good".

Clojure "Multimethods" are completely different. Essentially they consist of two parts. One is a dispatch function, and the other part is a bunch of functions to be chosen from according to the dispatch value that the dispatch function returns for an invocation. The dispatch function can contain any calculation, and in addition it is possible to define hierarchies of dispatch values, similar to what can be done with type based multiple inheritance in OO languages like C++. Again very powerful, and if you're interested in the details here's an online excerpt of "Clojure in Action".

By now, I hope you are able to see that Clojure and Haskell open the door to a different but more powerful way of designing systems. The interesting features of the OO class, like dispatching and ease of implementation reuse through inheritance, fade in the light of the more general concepts of today's FP languages.

To complete the picture, here's my discussion of some leftover minor features that OO classes provide, namely type parameters, encapsulation, and intensional equality.

Type parameters

In the case of Haskell, type parameters are available for types (used in so-called "type constructors") and type classes. If you know Java Generics, then the term "type variable" is a good match for "type parameter". These can be combined with constraints to restrict the set of admissible types. So with Haskell you don't lose anything.

Clojure is basically untyped, so the whole concept doesn't apply. This gives more freedom, but also more responsibility. However, Typed Clojure adds compile-time static type checking if type annotations are provided. For functions, Typed Clojure offers type parameters. But, AFAIK, at the time of this writing Typed Clojure doesn't offer type parameters for records.

Encapsulation

The OO practitioner will have noticed that the very important OO idea of member visibility seems to be completely missing in Clojure and Haskell. This is true as far as data is concerned.

Restriction of visibility in OO can have two reasons:

  • Foreign code must not rely on a member's value because it's not part of the API and might be subject to change without warning.
  • Foreign code must not be allowed to change a member's value when this could bring the object into an inconsistent state.

As available in today's mainstream OO languages, I don't see that the first reason is sufficiently covered by technical visibility, because the notion of "foreign" cannot be adequately encoded. In fact, an idea like PublishedInterface makes sense only because visibility does not express our intent. Instead of a technical restriction, plain documentation seems to be a better way to express how reliable a piece of a data structure is.

Regarding the second reason, immutable data and referential transparency certainly help. A system where most of the functions don't rely on mutable state is less prone to failure caused by erroneous modification of some state. Of course, it is still possible to bring data into an inconsistent state and pass it to functions which signal an exception. But the same is possible for mutable objects, the only difference being that a setter signals the exception a bit earlier. Eventually, validation must detect this at the earliest possible point in time to give feedback to the user or an external system. This is true regardless of the paradigm.
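
A tiny Clojure sketch of why erroneous modification is less of a concern with immutable data (the map is made up):

(def p {:name "Ann" :balance 100})

(def p2 (assoc p :balance -1)) ; yields a new value

p
;=> {:name "Ann", :balance 100} ; the original is untouched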

Regarding the visibility of functions, Haskell as well as Clojure give us a means to make a subset of them available in other modules / namespaces and to protect all others from public usage. Due to its type system, Haskell can additionally protect data structures from being used in foreign modules.

Intensional Equality

In a language that consistently uses value semantics there is only extensional equality, and the test for equality works everywhere with = or ==. If we need identity in our data, we can resort to the same mechanism that we use for records in a relational database: either define an id function that makes use of some of the fields, or add a surrogate key. Anyway, in a system with minimal mutable state the appetite for object identity diminishes rapidly.
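
A minimal Clojure sketch of this idea (the record and the choice of key are hypothetical):

(defrecord Person [ssn name])

(= (->Person "123" "Ann") (->Person "123" "Ann"))
;=> true ; extensional equality: equal fields means equal values

(defn person-id [p] (:ssn p)) ; identity as a chosen key, as in a database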

Are there any downsides?

Of course, apart from being different and requiring us to learn some stuff, programming without objects -- especially without mutable state -- sometimes calls for ideas and concepts that we would never see in OOP7. For example, there is the concept of a zipper for manipulating tree-like data, which is dispensable in OO because we can just hold onto a node in a tree and modify it efficiently in place. Or, you can't create cyclic dependencies in data structures. In my experience you seldom encounter these restrictions, and you either discover that your idea was crap anyway, or find that there is an unexpected but elegant solution available.
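
For a first taste, here is a minimal sketch using Clojure's built-in clojure.zip (the tree is made up):

(require '[clojure.zip :as z])

(-> (z/vector-zip [1 [2 3] 4])
    z/down z/right z/down z/right ; navigate to the 3
    (z/edit inc)                  ; "modify" it, producing a new tree
    z/root)
;=> [1 [2 4] 4]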

A system entirely without side-effects is a useless thing, so real-life functional languages offer ways to enable them. By being either forced by the compiler to wrap side-effects (as in Haskell) or punished by obtrusive API calls (as in Clojure), we put side-effects at specific places and handle them with great care. This affects the structure of the system. But it is a good thing, because referential transparency in large parts of a code base makes testing, parallelization, and simply understanding what code does a breeze.
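
To see what "punished by obtrusive API calls" looks like in Clojure: mutable state is opt-in and lives behind explicit reference types. A minimal sketch:

(def counter (atom 0)) ; the mutable cell is declared loudly

(swap! counter inc)    ; the only way to change it is through its explicit API

@counter
;=> 1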

You may argue that imperative programming leads to more efficient programs than functional programming. You're right. Manual memory management is also likely to be more efficient than relying on a garbage collector -- if done correctly. Is there anyone on the JVM missing manual memory management? No? So there is certainly a price. But modern multi-core hardware, as available in enterprise computing, is not as expensive as precious programmer time.

Conclusion

This was really a long journey. I started by breaking down and analyzing the features that the good old OO class offers. I continued by showing how you can do the same and more using a modern FP language. I know, it looks strange, and I promise it feels strange... but only in the beginning. In the end you're much better off learning how to use the power of a modern language like Clojure or Haskell.


1. Here, I refer to OO concepts as available today in the mainstream. Initially, objects were conceived more as autonomous cells that exchange messages, very similar to what the Actor model provides. So in a sense, what we think of today as OOP is not what its inventors thought it should be.

2. Which is of course only true if OOP languages are all you know. Read about the Blub Paradox to get an idea why it is so hard to pick up totally different approaches to programming.

3. Regarding equality of data, the term "ValueObject" is IMO an outright misnomer. A Value supports extensional equality, whereas an Object has identity and therefore supports intensional equality. The term "Value" alone would have been better.

4. This "subject verb" order isn't appropriate in every situation, as Steve Yegge enjoyably points out in his famous blog post Execution in the Kingdom of Nouns.

5. If you are concerned about missing type annotations in Clojure records: There exists an optional type system called Typed Clojure that allows to add compile-time static type checking. Alternatively you can use libraries like Domaintypes or Prismatic Schema to add type-like notation and get corresponding runtime validation.

6. What can we do in OO to add methods contained in a new interface to an existing type without touching the type's implementation? Use the Adapter pattern, if you're lucky enough to influence how instances are created, and if you're able to retain a reasonable equality relation between wrapped and unwrapped instances. Good luck!

7. No, I don't refer to Monads. Java 8 introduced Optional which is only a flavor of the Haskell Maybe Monad. So there is evidence that the idea is useful in OOP.

by Falko Riemenschneider (falko.riemenschneider@arcor.de) at September 17, 2014 12:00 AM

HN Daily

September 16, 2014

StackOverflow

RPC for Python 3 and Node.js?

I'm looking for a library that allows me to call Python 3 code from a Node.js server.

This question led me to ZeroRPC, but it's not compatible with Python 3.

Does anyone know of such a library? Or should I try to implement my own (and in that case, with which underlying transport?)

by Will at September 16, 2014 11:54 PM

Overcoming Bias

Great Filter TEDx

This Saturday I’ll speak on the great filter at TEDx Limassol in Cyprus. Though I first wrote about the subject in 1996, this is actually the first time I’ve been invited to speak on it. It only took 19 years. I’ll post links here to slides and video when available.

by Robin Hanson at September 16, 2014 11:35 PM

StackOverflow

Ansible-galaxy throws ImportError: No module named yaml


When I try to install an Ansible role, I see this exception.

 $ ansible-galaxy install zzet.postgresql
 Traceback (most recent call last):
 File "/Users/myHomeDir/.homebrew/Cellar/ansible/1.4.3/libexec/bin/ansible-galaxy", line 34, in <module>
 import yaml
 ImportError: No module named yaml

OS: Mac OS X Mavericks
Ansible: 1.4.3

Does anyone know how to fix it?

by Alexander Vagin at September 16, 2014 11:32 PM

StackOverflow

Dynamic query parameters in Slick (sorting)

I'm trying to convert anorm queries to Slick in one of the Play 2.3 samples, but I'm not sure how to implement dynamic sorting.

This is the original method:

def list(page: Int = 0, pageSize: Int = 10, orderBy: Int = 1, filter: String = "%"): Page[(Computer, Option[Company])] = {

    val offest = pageSize * page

    DB.withConnection { implicit connection =>

        val computers = SQL(
        """
          select * from computer 
          left join company on computer.company_id = company.id
          where computer.name like {filter}
          order by {orderBy} nulls last
          limit {pageSize} offset {offset}
        """
        ).on(
            'pageSize -> pageSize,
            'offset -> offest,
            'filter -> filter,
            'orderBy -> orderBy
        ).as(Computer.withCompany *)

        val totalRows = SQL(
        """
          select count(*) from computer 
          left join company on computer.company_id = company.id
          where computer.name like {filter}
        """
        ).on(
            'filter -> filter
        ).as(scalar[Long].single)

        Page(computers, page, offest, totalRows)

    }

}

This is how far I've got with the first query:

val computers_ = (for {
    (computer, company) <- Computer.where(_.name like filter) leftJoin
        Company on (_.companyId === _.id)
} yield (computer, company.?)).list

How do I do the "order by" part in slick, bearing in mind it's a column name passed to the method dynamically as a parameter?

Scala 2.10.4 / Play 2.3 / Slick 2.0.2

Table classes generated by Slick code generator below:

package tables
// AUTO-GENERATED Slick data model
/** Stand-alone Slick data model for immediate use */
object Tables extends {
  val profile = scala.slick.driver.H2Driver
} with Tables

/** Slick data model trait for extension, choice of backend or usage in the cake pattern. (Make sure to initialize this late.) */
trait Tables {
  val profile: scala.slick.driver.JdbcProfile
  import profile.simple._
  import scala.slick.model.ForeignKeyAction
  // NOTE: GetResult mappers for plain SQL are only generated for tables where Slick knows how to map the types of all columns.
  import scala.slick.jdbc.{GetResult => GR}

  /** DDL for all tables. Call .create to execute. */
  lazy val ddl = Company.ddl ++ Computer.ddl

  /** Entity class storing rows of table Company
   *  @param id Database column ID PrimaryKey
   *  @param name Database column NAME  */
  case class CompanyRow(id: Long, name: String)
  /** GetResult implicit for fetching CompanyRow objects using plain SQL queries */
  implicit def GetResultCompanyRow(implicit e0: GR[Long], e1: GR[String]): GR[CompanyRow] = GR{
    prs => import prs._
    CompanyRow.tupled((<<[Long], <<[String]))
  }
  /** Table description of table COMPANY. Objects of this class serve as prototypes for rows in queries. */
  class Company(tag: Tag) extends Table[CompanyRow](tag, "COMPANY") {
    def * = (id, name) <> (CompanyRow.tupled, CompanyRow.unapply)
    /** Maps whole row to an option. Useful for outer joins. */
    def ? = (id.?, name.?).shaped.<>({r=>import r._; _1.map(_=> CompanyRow.tupled((_1.get, _2.get)))}, (_:Any) =>  throw new Exception("Inserting into ? projection not supported."))

    /** Database column ID PrimaryKey */
    val id: Column[Long] = column[Long]("ID", O.PrimaryKey)
    /** Database column NAME  */
    val name: Column[String] = column[String]("NAME")
  }
  /** Collection-like TableQuery object for table Company */
  lazy val Company = new TableQuery(tag => new Company(tag))

  /** Entity class storing rows of table Computer
   *  @param id Database column ID PrimaryKey
   *  @param name Database column NAME 
   *  @param introduced Database column INTRODUCED 
   *  @param discontinued Database column DISCONTINUED 
   *  @param companyId Database column COMPANY_ID  */
  case class ComputerRow(id: Long, name: String, introduced: Option[java.sql.Timestamp], discontinued: Option[java.sql.Timestamp], companyId: Option[Long])
  /** GetResult implicit for fetching ComputerRow objects using plain SQL queries */
  implicit def GetResultComputerRow(implicit e0: GR[Long], e1: GR[String], e2: GR[Option[java.sql.Timestamp]], e3: GR[Option[Long]]): GR[ComputerRow] = GR{
    prs => import prs._
    ComputerRow.tupled((<<[Long], <<[String], <<?[java.sql.Timestamp], <<?[java.sql.Timestamp], <<?[Long]))
  }
  /** Table description of table COMPUTER. Objects of this class serve as prototypes for rows in queries. */
  class Computer(tag: Tag) extends Table[ComputerRow](tag, "COMPUTER") {
    def * = (id, name, introduced, discontinued, companyId) <> (ComputerRow.tupled, ComputerRow.unapply)
    /** Maps whole row to an option. Useful for outer joins. */
    def ? = (id.?, name.?, introduced, discontinued, companyId).shaped.<>({r=>import r._; _1.map(_=> ComputerRow.tupled((_1.get, _2.get, _3, _4, _5)))}, (_:Any) =>  throw new Exception("Inserting into ? projection not supported."))

    /** Database column ID PrimaryKey */
    val id: Column[Long] = column[Long]("ID", O.PrimaryKey)
    /** Database column NAME  */
    val name: Column[String] = column[String]("NAME")
    /** Database column INTRODUCED  */
    val introduced: Column[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("INTRODUCED")
    /** Database column DISCONTINUED  */
    val discontinued: Column[Option[java.sql.Timestamp]] = column[Option[java.sql.Timestamp]]("DISCONTINUED")
    /** Database column COMPANY_ID  */
    val companyId: Column[Option[Long]] = column[Option[Long]]("COMPANY_ID")

    /** Foreign key referencing Company (database name FK_COMPUTER_COMPANY_1) */
    lazy val companyFk = foreignKey("FK_COMPUTER_COMPANY_1", companyId, Company)(r => r.id, onUpdate=ForeignKeyAction.Restrict, onDelete=ForeignKeyAction.Restrict)
  }
  /** Collection-like TableQuery object for table Computer */
  lazy val Computer = new TableQuery(tag => new Computer(tag))
}

UPDATE - SOLUTION: The final solution is in this question.

by Caballero at September 16, 2014 11:12 PM

Ways to avoid Exception in thread "main" clojure.lang.ArityException?

When a user supplies no command line arguments, I want Hello World to print usage information instead of an error trace.

":";exec clj -m `basename $0 .clj` ${1+"$@"}
":";exit

(ns hello
    (:gen-class))

(defn -main
    [greetee]
    (println (str "Hello " greetee "!")))

$ ./hello.clj Fred
Hello Fred!
$ ./hello.clj 
Exception in thread "main" clojure.lang.ArityException: Wrong number of args (0) passed to: hello$-main
    at clojure.lang.AFn.throwArity(AFn.java:439)
    at clojure.lang.AFn.invoke(AFn.java:35)
    at clojure.lang.Var.invoke(Var.java:397)
    at clojure.lang.AFn.applyToHelper(AFn.java:159)
    at clojure.lang.Var.applyTo(Var.java:518)
    at clojure.core$apply.invoke(core.clj:600)
    at clojure.main$main_opt.invoke(main.clj:323)
    at clojure.lang.FnLoaderThunk.invoke(FnLoaderThunk.java:36)
    at clojure.main$main.doInvoke(main.clj:426)
    at clojure.lang.RestFn.invoke(RestFn.java:422)
    at clojure.lang.FnLoaderThunk.invoke(FnLoaderThunk.java:36)
    at clojure.lang.Var.invoke(Var.java:405)
    at clojure.lang.AFn.applyToHelper(AFn.java:165)
    at clojure.lang.Var.applyTo(Var.java:518)
    at clojure.main.main(main.java:37)

by mcandre at September 16, 2014 10:53 PM

QuantOverflow

Why should we expect geometric Brownian motion to model asset prices?

Disclaimer: I am a complete ignoramus about finance, so this may be an inappropriate forum for me to ask a question in.

I am a mathematician who knows nothing about finance. I heard from a popular source that something called the Black-Scholes equation is used to model the prices of options. Out of curiosity, I turned to Wikipedia to learn about the model. I was shocked to learn that it assumes that the log of the price of an asset follows a Brownian motion with drift (the asset price itself is then said to follow a "geometric" Brownian motion). Why, I wondered, should that be a good model? I can understand that asset prices have to be unpredictable or else smart traders would be able to beat the market by predicting them, but there would seem to be many unpredictable alternatives to geometric Brownian motion.
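
For reference, in standard notation (not from the original post), the assumption is that the price $S_t$ satisfies

$$dS_t = \mu S_t\,dt + \sigma S_t\,dW_t,$$

and Itô's lemma turns this into the equivalent statement about the log-price,

$$\log S_t = \log S_0 + \left(\mu - \tfrac{\sigma^2}{2}\right)t + \sigma W_t,$$

which is exactly a Brownian motion with drift.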

I have found one source that addresses my question, the following book chapter: http://www.probabilityandfinance.com/chapters/chap9.pdf and an argument it alludes to in chapter 11 of the same book. The analysis here looks very interesting, and I am curious if it is generally accepted in the finance community. I have not studied it enough to understand how realistic its assumptions are, however. It apparently depends on a "continuous time" assumption that seems like it might not be very realistic given that real markets move in response to discrete news events such as earnings announcements.

by Mary S at September 16, 2014 10:46 PM

StackOverflow

best practice to use scala immutable Queue

I would like to know how to use Queue in the best functional way. For example, I would like to dequeue elements and print them with a recursive function, and I would like the most beautiful function possible.

For example, this is a function doing what I want, but I dislike the if.

Is there a better way to use Queue?

import scala.collection.immutable.Queue

def printQ[A](p:Queue[A]) {
  if(!p.isEmpty) {
    p.dequeue match { 
      case (x,xs) => 
        println(x.toString) 
        printQ(xs) 
      case _ => 
        println("End")    
    }
  }    
}

printQ(Queue(1,2,4,5))

Thanks for responses.

by fsart at September 16, 2014 10:35 PM

QuantOverflow

multiperiod optimization using R

I'm interested in multistage optimization problems. Are there any good R packages around for solving such problems over time? I'm not at all an expert in this, so maybe someone knows a good paper / lecture notes to start with? I know classical optimization (linear optimization, convex optimization, etc.), but I've never had to deal with optimization over time. Any references, theoretical or implementation-related, are very welcome. I know that this is a very general question, but that is due to my (not yet) attained knowledge. If further clarification is needed, I'm happy to provide it. Many thanks in advance.

EDIT

Let's take for example the following paper, where we have an optimization problem of the form:

$$\max \sum_{i=1}^{n+1}r^L_ix_i^L$$

such that

$$x^l_i = r^{l-1}_i x_i^{l-1} - y_i^l + z^l_i, \quad i=1,\dots,n,\ l=1,\dots,L$$

$$x^l_{n+1} = r^{l-1}_{n+1} x_{n+1}^{l-1} + \sum_{i=1}^n (1-\mu^l_i)\, y_i^l - \sum_{i=1}^n (1+\nu_i^l)\, z^l_i$$

$$y^l_i \ge 0, \quad x^l_i \ge 0, \quad z^l_i \ge 0, \quad i=1,\dots,n,\ l=1,\dots,L$$

where $x_i^l$ is the value (in dollars) of asset $i$ at time $l$, $r_i^l$ is the asset return, and $y^l_i$ and $z^l_i$ are the amounts of the asset sold and bought. $\mu^l_i$ and $\nu_i^l$ also have an economic interpretation, but that is not important for the question. Assuming everything is deterministic, we can solve this problem using interior point or simplex methods, since it is a "simple" LP. However, the theory I'm looking for should tell me whether it is optimal to solve at every time $l$ the subproblem (maximize $\sum_{i=1}^{n+1} r^l_i x^l_i$ under the corresponding constraints) or whether this is not a good idea. I have heard / read that one could solve this kind of problem using stochastic programming, but still I'm interested in knowing how to subdivide (if possible) this kind of problem.

by user8 at September 16, 2014 10:19 PM

CompsciOverflow

Scheduling Algorithm for aperiodic multiprocessor tasks

I've got a System, which is defined as follows:

  • A limited number of stations is given (n), which can offer different services (for now there are two services). A station can either offer one of the services or both (maybe it will be extended one day, then it should work with x different services and a station can offer 1..x of these services).
  • The throughput for a service can differ from station to station.
  • Each service in the system can have a fixed priority assigned by the developer. If multiple services have to be scheduled and the scheduler isn't able to assign each service at once to a single station, it can split up the services and choose the service with the highest priority to run first, running the other service(s) later on. (also see next point)
  • If a task requests multiple services, it is okay to split them up. For example, if two services are requested (a and b): requested service a runs on station c for 10 minutes; afterwards the task moves to station d to receive service b. However, you could also think of this as two separate tasks, each of which requests a single service, if you prefer.
  • A task can arrive at any time, for example the incoming of a new request.
  • Each task has an arrival time, an execution time (computed from the requested amount of the service and the amount a station can offer; in other words, this is the time a particular station will need to complete the task) and a deadline.
  • Different tasks have different execution times.
  • Arrival time: for "reservations", the arrival time is known before the task arrives; in other cases the arrival time is not known, so it equals "now".
  • A station can serve one task at a time, but a task can request multiple services at once (there are stations which offer these services at the same time; the different services don't influence each other).
  • Tasks can be preempted and moved between stations during execution, meaning that if one task is running on a station the task can be assigned to a different station which offers the same service (this station-change can cause costs, based on the distance between different stations). This could be caused by another task which needs the station the first task was running on until now.
  • The algorithm has to work in real time.

The system is given and already works with one service; I have to adapt it to work with multiple services. For this, I have to find and implement a new scheduling algorithm, but I have no idea what to look for. Another possibility is to design my own algorithm, but due to my lack of knowledge in this area I think that would be a bad idea.

Any hints on what kind of scheduling algorithms I should look into?

by auwieha at September 16, 2014 10:14 PM

StackOverflow

Can you add parameters to Actions?

Say I have an Action and I want it to optionally force HTTPS.

How can I add a parameter to a custom action?

import play.api.mvc._

def onlyHttps[A](action: Action[A]) = Action.async(action.parser) { request =>
  request.headers.get("X-Forwarded-Proto").collect {
    case "https" => action(request)
  } getOrElse {
    Future.successful(Forbidden("Only HTTPS requests allowed"))
  }
}

So in my controller:

def index = onlyHttps(false) {
  // ..
}

Another use case: I want to check whether the currently logged-in user has a certain level of permission, so I want to pass the permission type(s) as a parameter to my custom actions.

by public static at September 16, 2014 10:02 PM

StackOverflow

Shapeless Generic and case class toArray exception depending on field types

I encountered the following puzzling behaviour (Scala 2.10.4, Shapeless 2.0):

import shapeless._
import poly._

object TypeMapper extends Poly1 {
  implicit def caseInt     = at[Int](identity)
  implicit def caseLong    = at[Long](identity)
  implicit def caseString  = at[String](identity)
}


case class L2(x: String, y: Long)
val l2Gen = Generic[L2]
val l2 = l2Gen.to(L2("A",1L)).map(TypeMapper).toArray

case class LL3( c: Long, a: String, b: String)
val ll3Gen = Generic[LL3]
val ll3 = ll3Gen.to(LL3(1L,"A","B")).map(TypeMapper).toArray

Both of these work as hoped, but when I have a case class with the following field layout

case class SL3(a: String, b: String, c: Long)
val sl3Gen = Generic[SL3]
val sl3 = sl3Gen.to(SL3("A","B",1L)).map(TypeMapper).toArray

The code blows up with

java.lang.ArrayIndexOutOfBoundsException: 1
    at shapeless.ops.hlist$LowPriorityToArray$$anon$103.loop$2(hlists.scala:595)
    at shapeless.ops.hlist$LowPriorityToArray$$anon$103.apply(hlists.scala:598)
    at shapeless.ops.hlist$LowPriorityToArray$$anon$103.apply(hlists.scala:589)
    at shapeless.syntax.HListOps.toArray(hlists.scala:439)
    at .<init>(<console>:17)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734)
    at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983)
    at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604)
    at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568)
    at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760)
    at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805)
    at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:717)
    at scala.tools.nsc.interpreter.ILoop.processLine$1(ILoop.scala:581)
    at scala.tools.nsc.interpreter.ILoop.innerLoop$1(ILoop.scala:588)
    at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:591)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:882)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
    at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:837)
    at scala.tools.nsc.interpreter.ILoop.main(ILoop.scala:904)
    at xsbt.ConsoleInterface.run(ConsoleInterface.scala:69)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:102)
    at sbt.compiler.AnalyzingCompiler.console(AnalyzingCompiler.scala:77)
    at sbt.Console.sbt$Console$$console0$1(Console.scala:23)
    at sbt.Console$$anonfun$apply$2$$anonfun$apply$1.apply$mcV$sp(Console.scala:24)
    at sbt.Console$$anonfun$apply$2$$anonfun$apply$1.apply(Console.scala:24)
    at sbt.Console$$anonfun$apply$2$$anonfun$apply$1.apply(Console.scala:24)
    at sbt.Logger$$anon$4.apply(Logger.scala:90)
    at sbt.TrapExit$App.run(TrapExit.scala:244)
    at java.lang.Thread.run(Thread.java:744)

Can anyone shed any light on why this is happening and (hopefully) how to avoid it?

by Gavin at September 16, 2014 09:47 PM

Clojure - Docjure: Method works in REPL but not in File

I'm just trying to read the contents of an Excel file in Clojure, using the docjure library. When I use the sample code in the REPL, the output is as I want it. But after inserting it into the file, I get a "Wrong number of args" error for the spreadsheet/select-sheet method.

Here is the code:

(use 'dk.ative.docjure.spreadsheet)

(->> (load-workbook (str (System/getProperty "user.dir") "/resources/public/xls/test.xls")
                    (select-sheet "menu")
                    (select-columns {:A :number, :D :name})
                    ))

The args for this method are [name ^Workbook workbook]. Why does it only need one argument in the REPL but two in the file?

by DanielderGrosse at September 16, 2014 09:46 PM

StackOverflow

Using eclipse editor shortcuts in scala worksheet in Scala IDE

How do I enable the typical editing keyboard shortcuts for .scala files in Scala worksheets as well?

For example, I use Cmd-/ to comment code in my .scala files. However, this shortcut does not work in a Scala worksheet.

by RAbraham at September 16, 2014 09:37 PM

/r/emacs

Windows Emacs?

What is the best Windows Emacs package? Should I just go with Cygwin? Or is there something decent that runs without all the Cygwin stuff?

submitted by kingpatzer
[link] [16 comments]

September 16, 2014 09:33 PM

StackOverflow

JSON Reads Writes for List[(Foo, List[FoodChildren]) ]

As the question says:

How do I best write a Writes[List[(Foo, List[FoodChildren])]] where each Foo and FoodChildren is itself a case class?

I am on Scala 2.11, Play Framework 2.3.1.

by user2066049 at September 16, 2014 09:27 PM

CompsciOverflow

Gateway 552GE Drivers [on hold]

I recently acquired a Gateway 552GE. It's an older PC from the XP era, but I am adding onto it to make it modern. I have added new hard drives and RAM, but I ran into a snag: upon re-installing the OS, some of the hardware drivers did not install. Does anyone know how I can find these drivers? Gateway has stopped support for this model, and I have heard of programs that will scan your computer for missing hardware drivers, but I do not know which programs are trustworthy. Any tips are appreciated. Thanks.

by Elser Pilot Car at September 16, 2014 09:25 PM

StackOverflow

Slick left/right/outer joins with Option

In the Slick examples there are a few examples of joins where one of the resulting columns can be null, as can be the case when doing left, right, or outer joins. For example:

val explicitLeftOuterJoin = for {
  (c, s) <- Coffees leftJoin Suppliers on (_.supID === _.id)
} yield (c.name, s.name.?)

But what if I want to return the entire mapped object? What I mean is:

val explicitLeftOuterJoin = for {
  (c, s) <- Coffees leftJoin Suppliers on (_.supID === _.id)
} yield (c, s.?)

This doesn't seem to work, as it complains: "could not find implicit value for evidence parameter of type scala.slick.lifted.TypeMapper[Suppliers]". Basically, I'd like it to return a list of tuples of (Coffee, Option[Supplier]).

Why doesn't this work, and what's the fix for it? Especially since this works fine:

val q = for {
  c <- Coffees
  s <- Suppliers
} yield (c, s)

(I know that's an inner join)

by siki at September 16, 2014 09:11 PM

StackOverflow

Scala Remote Actor Example - Eclipse

I used the example code given in this link to implement the Scala remote application: http://stackoverflow.com/a/15367735/1932985

I get the following output:
Server Output:

akka://GreetingSystem/user/joe
Server ready
joe received local msg! from Actor[akka://GreetingSystem/deadLetters]

Client Output:

STARTING
That 's Joe:ActorSelection[Anchor(akka://GreetingSystem-1/deadLetters), Path(/user/joe)]
Client has sent Hello to joe
[INFO] [09/16/2014 16:39:49.167] [GreetingSystem-1-akka.actor.default-dispatcher-5]   [akka://GreetingSystem-1/deadLetters] Message [java.lang.String] from Actor[akka://GreetingSystem-1/deadLetters] to Actor[akka://GreetingSystem-1/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [09/16/2014 16:39:49.168] [GreetingSystem-1-akka.actor.default-dispatcher-5] [akka://GreetingSystem-1/deadLetters] Message [java.lang.String] from Actor[akka://GreetingSystem-1/user/$a#-555317575] to Actor[akka://GreetingSystem-1/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

I ran the server file first, followed by the client file. Is the output I received correct? Or is the way I executed it wrong? Please help me out!

Thanks, Keshav

by keshr3106 at September 16, 2014 08:56 PM

How to remove a blocking recv without a zmq_term that kills the central zmq_context?

I have several threads attached to the same zmq_context.

The recommended way to close is to use zmq_term, which will wake up the thread, but that will close all sockets.

I have different sockets on different threads and wish to share the same context (to reduce zmq thread count).

There is a recv which blocks and I would like to be able to shut it down immediately without affecting all the other sockets.

Is this even possible with zmq?

by user1335325 at September 16, 2014 08:51 PM

/r/compsci

Tight Bound on Best and Worst Case

What does it mean to find an upper bound (big-O) on the best case? And a lower bound (big-Omega) on the worst case? There seem to be many resources giving a technical explanation without really explaining the intuition behind it.

submitted by ghostofdilla
[link] [3 comments]

September 16, 2014 08:39 PM

StackOverflow

Embed play in a standard Java web-app?

We have a webapp using standard JSP/servlets and packaged as a war file. This makes the app really portable. We can push it to any cloud supporting Java (such as AWS beanstalk). I believe Play 2.0 does not provide standalone war files.

However, I want to use some of Play's features, such as web-socket servers, without using Play itself. So I was wondering if I can simply import a "play.jar"-type file into my standard J2EE web-app and use it in "embedded mode" to run a web-socket server without having to fully run Play.

by Jus12 at September 16, 2014 08:33 PM

/r/freebsd

arc4random API is still RC4?

I was just looking at the FreeBSD source code and noticed that the arc4random API is apparently still implemented using RC4, which is probably not the best choice in this post-Snowden world. There have apparently been two patches (1, 2) to change it to the much safer ChaCha20, as in OpenBSD, but neither has been applied so far.

Does anybody here know if there has been any further work on getting this changed? It would be unfortunate if anything important relied on an API that uses RC4 as a source of cryptographically secure random numbers.

submitted by FlailingBorg
[link] [5 comments]

September 16, 2014 08:22 PM

StackOverflow

What's the right way to include a browser REPL on a page, but only in development?

I'm using Austin to set up a browser-connected REPL, and following the example of its sample project, which uses Enlive to add the REPL script to the page.

Now I'd like to deploy my app, but I don't want Austin or my REPL to be on the page in production. What's the intended way to use the REPL only in development?

Is there a way to use Enlive as a middleware I could use in development and not in production?

by Peeja at September 16, 2014 08:03 PM

Fefe

Now that the pilot test for forced labor under the name ...

Now that the pilot test for forced labor under the name "Hartz IV" has gone through successfully, Hamburg is now also shedding the little fig leaf of the "Mehraufwandsentschädigung" (compensation for additional expenses). Well, why should things ever get better in this country, as long as we let them get away with their salami tactics.

September 16, 2014 08:02 PM

StackOverflow

In zmq, I am using push and subscribe with two different sockets. Why can I push data to the server while the server is unable to publish back?

#include "string.h"
#include "assert.h"
#include "zhelpers.h"
#include "zmq.h"

int main (void)
{

   void *context = zmq_ctx_new ();
   void *pushh   = zmq_socket  ( context, ZMQ_PUSH );          /* push socket */
   int   rc      = zmq_connect ( pushh, "tcp://localhost:5555" );
   assert (rc == 0);

   void *subscriber = zmq_socket  ( context, ZMQ_SUB );
   rc =               zmq_connect ( subscriber, "tcp://localhost:5557" );

   //                                          |0123456<NULL>|
   zmq_setsockopt ( subscriber, ZMQ_SUBSCRIBE, "10001 ", 6 );
   assert (rc == 0);

   char buffer[10];

   printf ("Enter. \n");
   scanf("%s", &buffer);

   zmq_send (pushh, buffer, 9, 0);    /* send message by pushing to server */

   sleep(1);
   zmq_recv (subscriber,buffer,10,0);

   printf ("Received= %s,.\n",buffer);    \*ready to be received message as subscriber`s`*\

   zmq_close (pushh);    
   zmq_ctx_destroy (context);

   return 0;

}

by monsterrrrr at September 16, 2014 07:57 PM

Git workflow - changing branch and slow recompiles

I work on a large Scala project where we use Git for version control. My workflow is to work on new features in my own branch and switch when needed. Various releases of the code are in their own branches. All very standard.

If I have to fix a bug in a certain version of the code, I'll switch to the correct branch, fix the bug, commit, then switch back to where I was.

The problem is that although git is instant at switching to another branch, once I'm there I have to recompile the code, which takes several minutes. Then I fix the bug, switch back to my own branch, and do another recompile, which takes another few minutes. It seems to defeat the purpose of Git being so fast.

Has anyone else encountered this? Are there ways around it? I'm sure it's not a Scala-specific problem (although Scala is epically slow at compiling).

update 3+ years later

I've been using @djs' answer (git-new-workdir) for the last few years. It has worked very well for me. I have a master directory and several other directories (like production, next-release, etc.) that I switch to when I need to do work there. There's very little overhead, and it means you can quickly switch to, say, production, to test something, then switch back to what you were working on.

by Dave at September 16, 2014 07:55 PM

/r/compsci

Flair for experts (& maybe weekly AMA threads)

A good suggestion arrived in the moderation inbox:

A few months ago /r/science tried a new user flair system where they asked people who were grad students, research staff, and industry experts to send the mods proof of their qualifications in return for user flair showing each verified member's respective field of expertise. For example, my flair reads Grad Student | Computer Science.

This idea went down quite well and helped people who weren't experts in the specific field of the post reply to the comments of people who actually had a background in the area of interest.

...

Naturally we would need more specific flair to denote the area of Computer Science. Some example user flairs might be

  • Grad Student | Theory
  • Research Assistant | Computer Graphics
  • Professor | HCI
  • Senior Lecturer | Natural Language Processing

I like it! If you have significant expertise in some area of Computer Science and want the subreddit to know about it, post some evidence in this thread. You can link to a published paper, thesis, technical blog, github repo, or anything else that establishes you as someone who deeply understands some topic. Let me know what you want your flair to say.

Also, I'd like to start having weekly threads like "Ask the Cryptographers". Does that sound like a good idea? Does anyone want to be the first expert guinea pig?

submitted by cypherx
[link] [32 comments]

September 16, 2014 07:47 PM

StackOverflow

How to convert a String-represented ByteBuffer into a byte array in Java

I'm new to Java and I'm not sure how to do the following:

A Scala application somewhere converts a String into bytes:

ByteBuffer.wrap(str.getBytes)

I collect this byte array as a Java String, and I wish to do the inverse of what the Scala code above did, hence getting back the original String (object str above).

Getting the ByteBuffer as a String to begin with is the only option I have, as I'm reading it from an AWS Kinesis stream (or is it?). The Scala code shouldn't change either.

Example string:

String str = "AAAAAAAAAAGZ7dFR0XmV23BRuufU+eCekJe6TGGUBBu5WSLIse4ERy9............";

How can this be achieved in Java?

EDIT

Okay, so I'll try to elaborate a little more about the process:

  1. A 3rd party Scala application produces CSV rows which I need to consume
  2. Before storing those rows in an AWS Kinesis stream, the application does the following to each row:

    ByteBuffer.wrap(output.getBytes);
    
  3. I read the data from the stream as a string, and the string could look like the following one:

    String str = "AAAAAAAAAAGZ7dFR0XmV23BRuufU+eCekJe6TGGUBBu5WSLIse4ERy9............";
    
  4. I need to restore the contents of the string above to its original, readable form.

I hope I've made it clearer now, sorry for puzzling you all to begin with.

by YuvalHerziger at September 16, 2014 07:43 PM

StackOverflow

FreeBSD - reinstalling APR from ports, asks for file to patch?

I'm running a home webserver on FreeBSD 9.1 (upgraded to 10), and I'm still working through some post-upgrade commands, but I can't move on until I solve this. When upgrading/installing /devel/apr1, whether via ports, pkg, or portmaster, I always get the same message requesting me to type in the file to patch. What file must I patch?!

root@node:/usr/ports/devel/apr1 # make reinstall clean
/!\ WARNING /!\
DEFAULT_MYSQL_VER is defined, consider using DEFAULT_VERSIONS=mysql=5.5 instead

===>  Found saved configuration for apr-1.5.1.1.5.3_4
===>   apr-1.5.1.1.5.3_4 depends on file: /usr/local/sbin/pkg - found
===> Fetching all distfiles required by apr-1.5.1.1.5.3_4 for building
===>  Extracting for apr-1.5.1.1.5.3_4
=> SHA256 Checksum OK for apr-1.5.1.tar.gz.
=> SHA256 Checksum OK for apr-util-1.5.3.tar.gz.
===>  Patching for apr-1.5.1.1.5.3_4
===>  Applying FreeBSD patches for apr-1.5.1.1.5.3_4
File to patch: 

If I skip by just pressing Enter, it asks if I want to skip the patch; I say yes, and then:

No file found--skip this patch? [n] y
1 out of 1 hunks ignored--saving rejects to Oops.rej
=> Patch patch-apr_hints.m4 failed to apply cleanly.
=> Patch(es) patch-apr-util__dbd__apr_dbd_freetds.c patch-apr__configure applied cleanly.
*** Error code 1

Stop.
make[1]: stopped in /usr/ports/devel/apr1
*** Error code 1

Stop.
make: stopped in /usr/ports/devel/apr1

I'm tired of searching for this but haven't found anything, so if you can help it would be great; otherwise I have no clue what to do besides reinstalling :( Thank you.

by Paul C. at September 16, 2014 07:08 PM

AWS

Amazon AppStream Now Supports Chrome Browser and Chromebooks

As you might know from reading my earlier posts (Amazon AppStream - Deliver Streaming Applications from the Cloud and Amazon AppStream - Now Available to All Developers), Amazon AppStream gives you the power to build complex applications that run from simple devices, unconstrained by the compute power, storage, or graphical rendering capabilities of the device. As an example of what AppStream can do, read about the Eve Online Character Creator (pictured at right).

Today we are extending AppStream with support for desktop Chrome browsers (Windows and Mac OS X) and Chromebooks. Developers of CAD, 3D modeling, medical imaging, and other types of applications can now build compelling, graphically-intense applications that run on an even wider variety of desktops (Linux, Mac OS X, and Microsoft Windows) and mobile devices ( Fire OS, Chromebooks, Android, and iOS). Even better, AppStream's cloud-based application hosting model obviates the need for large downloads, complex installation processes and sophisticated graphical hardware on the client side. Developers can take advantage of GPU-powered rendering in the cloud and use other AWS services to host their application's backend in a cost-effective yet fully scalable fashion.

Getting Started With Appstream
The AppStream Chrome SDK (available via the AppStream Downloads page) contains the documentation and tools that you need to have in order to build AppStream-compatible applications. It also includes the AppStream Chrome Application. You can use it as-is to view and interact with AppStream streaming applications, or you can customize it (using HTML, JavaScript, and CSS) with custom launch parameters.

The AppStream Chrome Application runs on Chrome OS version 37 and higher, on Chrome desktop browsers for Windows, Mac OS X, and Linux, and on Chromebooks. Chrome mobile and other HTML 5 web browsers are not currently supported. The application is available in the Chrome Web Store (visit Appstream Chrome App) and can be launched via chrome://apps.

The AppStream SDK is available at no charge. As detailed in the AppStream Pricing page, you also have access to up to 20 hours of streaming per month for 12 months as part of the AWS Free Tier. You will also have to register for a Chrome Developer Account at a cost of $5 (paid to Google, not to AWS).

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at September 16, 2014 07:00 PM

StackOverflow

Output path is shared between the same module error

When I try to compile any class in my project I get the error below:

Error scala: Output path .../eval/target/test-classes is shared between: Module 'eval' tests, Module 'eval' tests
      Output path .../eval/target/classes is shared between: Module 'eval' production, Module 'eval' production
      Please configure separate output paths to proceed with the compilation.

I've seen how to set the output path in IDEA, and I've done it. But since the error claims the path is shared between a module and itself, I couldn't solve it that way.

Note: Using Maven and IntelliJ IDEA.

Please, can anyone help?

by user2682382 at September 16, 2014 06:48 PM

Futures somehow slower than agents?

The following code essentially just lets you execute something like (function (range n)) in parallel.

(experiment-with-agents 10000 10 #(filter prime? %))

This for example finds the prime numbers between 0 and 10000 with 10 agents.

(experiment-with-futures 10000 10 #(filter prime? %))

The same, just with futures.

Now the problem is that the solution with futures doesn't run faster with more futures. Example:

; Futures
(time (experiment-with-futures 10000 1 #(filter prime? %)))
"Elapsed time: 33417.524634 msecs"

(time (experiment-with-futures 10000 10 #(filter prime? %)))
"Elapsed time: 33891.495702 msecs"

; Agents
(time (experiment-with-agents 10000 1 #(filter prime? %)))
"Elapsed time: 33048.80492 msecs"

(time (experiment-with-agents 10000 10 #(filter prime? %)))
"Elapsed time: 9211.864133 msecs"

Why? Did I do something wrong (probably; I'm new to Clojure and just playing around with stuff^^)? I thought futures were actually preferred in this scenario.

Source:

(defn setup-agents
  [coll-size num-agents]
  (let [step (/ coll-size num-agents)
        parts (partition step (range coll-size))
        agents (for [_ (range num-agents)] (agent []) )
        vect (map #(into [] [%1 %2]) agents parts)]
    (vec vect)))

(defn start-agents
  [coll f]
  (for [[agent part] coll] (send agent into (f part))))

(defn results
  [agents]
  (apply await agents)
  (vec (flatten (map deref agents))))

(defn experiment-with-agents
  [coll-size num-agents f]
  (-> (setup-agents coll-size num-agents)
      (start-agents f)
      (results)))

(defn experiment-with-futures
  [coll-size num-futures f]
  (let [step (/ coll-size num-futures)
        parts (partition step (range coll-size))
        futures (for [index (range num-futures)] (future (f (nth parts index))))]
    (vec (flatten (map deref futures)))))

by Saytiras at September 16, 2014 06:46 PM

scalaz stream iterate each line and map it to a view object for a large file and return an iterator

I have a very large file, and each line can be parsed into a view object. However, I want to return an Iterator[A] instead of a collection, so that it has better memory characteristics for parsing the large file.

factory.createContainer(line: String): Foo = .......

def parse: Iterator[Foo] = {
io.linesR("src/test/resources/largedummy.txt")
    .map(line => factory.createContainer(line))
    .to(.......)

    // I am not sure what to write here to return an Iterator[Foo]
}

Many thanks in advance

by Cloud tech at September 16, 2014 06:45 PM

/r/compsci

Need help in choosing a project to do for my application resume

So I will be applying in a month or so to some colleges for CS, and I figured it would help my interviews if I had a project to show that I am interested in the major I am pursuing. I have minimal experience in CS, so if you can help me choose something that I can learn and complete by next month or less, it will be greatly appreciated.

submitted by DuckDown19
[link] [4 comments]

September 16, 2014 06:44 PM

StackOverflow

How do I make a trait to mix in with an object that extends MappedLongForeignKey, that will override def asHtml and def validSelectValues?

I have defined my model as follows:

object Curr extends Curr with LongKeyedMetaMapper[Curr] with CRUDify[Long, Curr] {

}
class Curr extends LongKeyedMapper[Curr] with IdPK with CreatedUpdated {
    def getSingleton = Curr 
    object code extends MappedString(this, 100)
    object name extends MappedString(this, 100)
}

object Country extends Country with LongKeyedMetaMapper[Country] with CRUDify[Long, Country] {
}
class Country extends LongKeyedMapper[Country] with IdPK with CreatedUpdated {
    def getSingleton = Country 
    object name extends MappedString(this, 100)
    object currid extends MappedLongForeignKey(this, Curr) {
       override def asHtml = { 
           <span>{Curr.find(By(Curr.id, this)).map(c => (c.name + " " + c.code)).openOr(Text(""))}</span> 
       } 
       override def validSelectValues: Box[List[(Long, String)]] = 
        Full(Curr.findAll(OrderBy(Curr.name, Ascending)).map(c => (c.id.is, c.code.is))) 
    }
}

I will have many such models, and I want to remove the redundancy of defining asHtml and validSelectValues for the many models that will have foreign keys. I figured I could do this with a trait MyField that would mix in to my model as follows:

object currid extends {val MyModel = Curr } MappedLongForeignKey(this, Curr) with MyField[Curr] {

with the trait being defined something like:

trait MyField[T <: LongKeyedMetaMapper[T] with IdPK] {
  val MyModel: T
  override def asHtml = { 
    <span>{MyModel.find(By(MyModel.id, this)).map(c => (c.name + " " + c.name)).openOr(Text(""))}</span> 
  } 
  override def validSelectValues: Box[List[(Long, String)]] = 
    Full(MyModel.findAll(OrderBy(MyModel.name, Ascending)).map(c => (c.id.is, c.name.is))) 
}

My trait, as written above, does not work. Here is the error that the compiler generates:

No implicit view available from net.liftweb.mapper.MyField[T] => Long.
[error]     <span>{MyModel.find(By(MyModel.id, this)).map(c => (c.name + " " + c.name)).openOr(Text(""))}</span> 
[error]                           ^
value name is not a member of type parameter T
[error]     Full(MyModel.findAll(OrderBy(MyModel.name, Ascending)).map(c => (c.id.is, c.name.is))) 
[error]                                          ^

I will make sure that each MyModel will have a name member. Can anyone advise on how to implement this trait?

Thanks!

by Shafique Jamal at September 16, 2014 06:15 PM

StackOverflow

Scala: Merge map

How can I merge maps like below:

Map1 = Map(1 -> Class1(1), 2 -> Class1(2))
Map2 = Map(2 -> Class2(1), 3 -> Class2(2))

After merging:

Merged = Map(1 -> List(Class1(1)), 2 -> List(Class1(2), Class2(1)), 3 -> List(Class2(2)))

It can be a List, Set or any other collection that has a size attribute.

by Robinho at September 16, 2014 05:32 PM

StackOverflow

Gradle Fails to Compile Basic Scala Project

I cannot seem to get gradle to properly compile a simple Scala project. The directory looks like:

.
├── build.gradle
└── src
    └── Main.scala

The build.gradle is simply:

apply plugin: 'scala'

sourceSets {
    main {
        scala {
            srcDir 'src'
        }
    }
}

It fails with this charming little error:

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':compileScala'.
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:68)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:46)
        at org.gradle.api.internal.tasks.execution.PostExecutionAnalysisTaskExecuter.execute(PostExecutionAnalysisTaskExecuter.java:34)
        at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter$1.run(CacheLockHandlingTaskExecuter.java:34)
        at org.gradle.internal.Factories$1.create(Factories.java:22)
        at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:179)
        at org.gradle.cache.internal.DefaultCacheAccess.longRunningOperation(DefaultCacheAccess.java:232)
        at org.gradle.cache.internal.DefaultPersistentDirectoryStore.longRunningOperation(DefaultPersistentDirectoryStore.java:138)
        at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.longRunningOperation(DefaultTaskArtifactStateCacheAccess.java:83)
        at org.gradle.api.internal.changedetection.CacheLockHandlingTaskExecuter.execute(CacheLockHandlingTaskExecuter.java:32)
        at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:55)
        at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:57)
        at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:41)
        at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:51)
        at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:52)
        at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:42)
        at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:247)
        at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.executeTask(DefaultTaskPlanExecutor.java:52)
        at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.processTask(DefaultTaskPlanExecutor.java:38)
        at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.process(DefaultTaskPlanExecutor.java:30)
        at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter.execute(DefaultTaskGraphExecuter.java:83)
        at org.gradle.execution.SelectedTaskExecutionAction.execute(SelectedTaskExecutionAction.java:29)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
        at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
        at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67)
        at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter$1.run(TaskCacheLockHandlingBuildExecuter.java:31)
        at org.gradle.internal.Factories$1.create(Factories.java:22)
        at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:124)
        at org.gradle.cache.internal.DefaultCacheAccess.useCache(DefaultCacheAccess.java:112)
        at org.gradle.cache.internal.DefaultPersistentDirectoryStore.useCache(DefaultPersistentDirectoryStore.java:130)
        at org.gradle.api.internal.changedetection.DefaultTaskArtifactStateCacheAccess.useCache(DefaultTaskArtifactStateCacheAccess.java:79)
        at org.gradle.api.internal.changedetection.TaskCacheLockHandlingBuildExecuter.execute(TaskCacheLockHandlingBuildExecuter.java:29)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
        at org.gradle.execution.DefaultBuildExecuter.access$200(DefaultBuildExecuter.java:23)
        at org.gradle.execution.DefaultBuildExecuter$2.proceed(DefaultBuildExecuter.java:67)
        at org.gradle.execution.DryRunBuildExecutionAction.execute(DryRunBuildExecutionAction.java:32)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:61)
        at org.gradle.execution.DefaultBuildExecuter.execute(DefaultBuildExecuter.java:54)
        at org.gradle.initialization.DefaultGradleLauncher.doBuildStages(DefaultGradleLauncher.java:158)
        at org.gradle.initialization.DefaultGradleLauncher.doBuild(DefaultGradleLauncher.java:113)
        at org.gradle.initialization.DefaultGradleLauncher.run(DefaultGradleLauncher.java:81)
        at org.gradle.launcher.cli.ExecuteBuildAction.run(ExecuteBuildAction.java:38)
        at org.gradle.launcher.exec.InProcessGradleLauncherActionExecuter.execute(InProcessGradleLauncherActionExecuter.java:39)
        at org.gradle.launcher.exec.InProcessGradleLauncherActionExecuter.execute(InProcessGradleLauncherActionExecuter.java:25)
        at org.gradle.launcher.cli.RunBuildAction.run(RunBuildAction.java:50)
        at org.gradle.api.internal.Actions$RunnableActionAdapter.execute(Actions.java:137)
        at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:201)
        at org.gradle.launcher.cli.CommandLineActionFactory$ParseAndBuildAction.execute(CommandLineActionFactory.java:174)
        at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:170)
        at org.gradle.launcher.cli.CommandLineActionFactory$WithLogging.execute(CommandLineActionFactory.java:139)
        at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:33)
        at org.gradle.launcher.cli.ExceptionReportingAction.execute(ExceptionReportingAction.java:22)
        at org.gradle.launcher.Main.doAction(Main.java:48)
        at org.gradle.launcher.bootstrap.EntryPoint.run(EntryPoint.java:45)
        at org.gradle.launcher.Main.main(Main.java:39)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.runNoExit(ProcessBootstrap.java:50)
        at org.gradle.launcher.bootstrap.ProcessBootstrap.run(ProcessBootstrap.java:32)
        at org.gradle.launcher.GradleMain.main(GradleMain.java:26)
Caused by: java.lang.NullPointerException: Cannot invoke method withInputStream() on null object
        at org.gradle.api.internal.project.AntBuilderDelegate.taskdef(DefaultIsolatedAntBuilder.groovy:136)
        at org.gradle.api.internal.tasks.scala.AntScalaCompiler$_execute_closure1.doCall(AntScalaCompiler.groovy:62)
        at org.gradle.api.internal.ClosureBackedAction.execute(ClosureBackedAction.java:58)
        at org.gradle.util.ConfigureUtil.configure(ConfigureUtil.java:130)
        at org.gradle.util.ConfigureUtil.configure(ConfigureUtil.java:91)
        at org.gradle.util.ConfigureUtil$configure.call(Unknown Source)
        at org.gradle.api.internal.project.DefaultIsolatedAntBuilder.execute(DefaultIsolatedAntBuilder.groovy:112)
        at org.gradle.api.internal.project.IsolatedAntBuilder$execute.call(Unknown Source)
        at org.gradle.api.internal.tasks.scala.AntScalaCompiler.execute(AntScalaCompiler.groovy:61)
        at org.gradle.api.internal.tasks.scala.AntScalaCompiler.execute(AntScalaCompiler.groovy)
        at org.gradle.api.internal.tasks.scala.DefaultScalaJavaJointCompiler.execute(DefaultScalaJavaJointCompiler.java:35)
        at org.gradle.api.internal.tasks.scala.DefaultScalaJavaJointCompiler.execute(DefaultScalaJavaJointCompiler.java:25)
        at org.gradle.api.internal.tasks.scala.DelegatingScalaCompiler.execute(DelegatingScalaCompiler.java:31)
        at org.gradle.api.internal.tasks.scala.DelegatingScalaCompiler.execute(DelegatingScalaCompiler.java:22)
        at org.gradle.api.internal.tasks.compile.IncrementalJavaCompilerSupport.execute(IncrementalJavaCompilerSupport.java:33)
        at org.gradle.api.internal.tasks.compile.IncrementalJavaCompilerSupport.execute(IncrementalJavaCompilerSupport.java:23)
        at org.gradle.api.tasks.scala.ScalaCompile.compile(ScalaCompile.java:131)
        at org.gradle.api.internal.BeanDynamicObject$MetaClassAdapter.invokeMethod(BeanDynamicObject.java:216)
        at org.gradle.api.internal.BeanDynamicObject.invokeMethod(BeanDynamicObject.java:122)
        at org.gradle.api.internal.CompositeDynamicObject.invokeMethod(CompositeDynamicObject.java:147)
        at org.gradle.api.tasks.scala.ScalaCompile_Decorated.invokeMethod(Unknown Source)
        at org.gradle.util.ReflectionUtil.invoke(ReflectionUtil.groovy:23)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$4.execute(AnnotationProcessingTaskFactory.java:161)
        at org.gradle.api.internal.project.taskfactory.AnnotationProcessingTaskFactory$4.execute(AnnotationProcessingTaskFactory.java:156)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:472)
        at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:461)
        at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:60)
        ... 57 more


BUILD FAILED

My gradle --version is:

------------------------------------------------------------
Gradle 1.3
------------------------------------------------------------

Gradle build time: Tuesday, November 20, 2012 11:37:38 AM UTC
Groovy: 1.8.6
Ant: Apache Ant(TM) version 1.8.4 compiled on May 22 2012
Ivy: 2.2.0
JVM: 1.6.0_24 (Sun Microsystems Inc. 20.0-b12)
OS: Linux 3.2.0-29-generic amd64

What in the world is going wrong?
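
For context: with Gradle 1.x the scala plugin compiles through Ant and expects the Scala compiler jars to be declared explicitly on the scalaTools configuration; an NPE like "Cannot invoke method withInputStream() on null object" inside AntScalaCompiler is the typical symptom of that configuration being empty. A hedged sketch of a build.gradle that supplies them (the Scala version here is an assumption):

apply plugin: 'scala'

repositories {
    mavenCentral()
}

dependencies {
    // Gradle 1.x resolves the Scala compiler from the scalaTools
    // configuration; leaving it empty leads to the NPE above.
    // (The Scala version is illustrative.)
    scalaTools 'org.scala-lang:scala-compiler:2.9.2'
    scalaTools 'org.scala-lang:scala-library:2.9.2'
    compile 'org.scala-lang:scala-library:2.9.2'
}

sourceSets {
    main {
        scala {
            srcDir 'src'
        }
    }
}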

by Travis Gockel at September 16, 2014 05:10 PM

Planet Clojure

Weekly Update: Clojure, Clojure, Clojure and a nasty cold

In this weekly update, I will just give you a very brief update on what I’ve been doing. Most notably, I have refactored the Clojure version of my BirdWatch application to use Stuart Sierra’s component library. Other than that, I am calling in sick for the week.

BirdWatch in action


Above you can see what the latest all-Clojure version looks like. I have added two features: a) aggregation and sorting by reach and b) a counter for the total number of tweets indexed, which is updated every ten seconds. You can see a live version by clicking on the animated GIF.

Componentizing BirdWatch

Over the weekend, I have componentized the server side of BirdWatch. In terms of functionality, I had been content with the earlier version, but in terms of structure, I started to recognize something that I had learned to dread: an application where everything depended on everything. Okay, at least there were no circular dependencies (any more), but there were still way too many dependencies between namespaces, in a way that I had seen way too often in past projects to lull myself into being satisfied. I don’t even like real spaghetti all that much.

The component library offers help by allowing us to structure an application into components (and boundaries between them) and then wiring them together by means of dependency injection. It took a moment or two to wrap my head around it, but once I had, I was convinced that I wanted to reap the benefits of DI and rewrite my application. The library’s description contains a fair warning that it is somewhat difficult to refactor an existing application to use it throughout, but I can now say for sure that it can be done and that it’s worth it.

As a result I have an application where ONLY the main namespace depends on the different components it wires together via dependency injection. Other than that, the different namespaces know nothing about each other. Communication between the different components takes place via core.async channels, which all live in a single component. This component holding the channels is then injected into the other components as a dependency.

I find this new architecture beautiful and I will surely write more about it soon. Until then, I could use your help. I am really just getting started with Clojure, with this being the first real application that I write in it. I would love to have more knowledgeable Clojurians review the code and point me at possible improvements. Right now I would especially appreciate that for the server-side code.

Are Clojure developers happier?

I recently read an article claiming that Clojure developers are the happiest, and while I cannot really say that the article provides hard evidence, I can say for sure that I for one enjoy programming in Clojure more than I have enjoyed programming in other languages for a while. Also I have found the community really helpful. Yesterday, I had a problem I couldn’t figure out myself. After scratching my head for way too long, it only took a few minutes after joining the Clojure room on IRC until I was happily coding again.

Clojure Resources

I have recently liberated my accumulated list of bookmarks on Clojure-related stuff, and since then I have added every new and useful link I have come across. I am now working on making it a habit to write a sentence or two about each new resource I discover. In the past couple of days, I have been really happy to see that people seem to find this compilation useful. Please go check it out if you haven’t already: Clojure-Resources on GitHub.

Closing remarks

Okay, back to bed, I need to get rid of this nasty cold ASAP. I have stuff to do, other than coughing. The bugs came without invitation last week and now they don’t seem inclined to leave. But on the upside, I went to the doctor today and he gave me a prescription for three different pharmaceuticals and assured me that I’ll likely survive.

Have a great remaining week, Matthias

by Matthias Nehlsen at September 16, 2014 05:06 PM

StackOverflow

Is it possible to browse the source of OpenJDK online?

Is it possible to browse the source code of OpenJDK online, just like I can do with SourceForge's projects? I have never used Mercurial before, so I felt confused.

(Note: I don't want to download the source. I just want to browse it online, to see how some methods are implemented.)

by Hosam Aly at September 16, 2014 05:04 PM

/r/emacs

Portland Pattern Repository

CompsciOverflow

How does one efficiently produce all binary sequences with an equal number of 0's and 1's?

A binary sequence of length $n$ is just an ordered sequence $x_1,\ldots,x_n$ so that each $x_j$ is either $0$ or $1$. In order to generate all such binary sequences, one can use the obvious binary tree structure in the following way: the root is "empty", but each left child corresponds to the addition of $0$ to the existing string and each right child to a $1$. Now, each binary sequence is simply a path of length $n+1$ starting at the root and terminating at a leaf.

Here's my question:

Can we do better if we only want to generate all binary strings of length $2n$ which have precisely $n$ zeros and $n$ ones?

By "can we do better", I mean we should have lower complexity than the silly algorithm which first builds the entire tree above and then tries to find those paths with an equal number of "left" and "right" edges.

by Vidit Nanda at September 16, 2014 04:58 PM

Lobsters

QuantOverflow

How is the default probability implied from market implied CDS spreads for CVA/DVA calculation?

From point 38 on p. 17, the default probability can be implied from market-implied CDS spreads. A "Macro Surface" method is mentioned, but I cannot find any clue as to what it is. Where do I get the academic reference for that?

Also what is the commonly used methodology to imply default probability for CVA/DVA calculation?

The article "Credit and Debit Valuation Adjustment" can be seen in http://www.ivsc.org/sites/default/files/IVSC%20CVA%20-DVA%20%20ED_0.pdf

by Dennis at September 16, 2014 04:54 PM

Lobsters

TheoryOverflow

An easy case of SAT that is not easy for tree resolution

Is there a natural class $C$ of CNF formulas - preferably one that has previously been studied in the literature - with the following properties:

  • $C$ is an easy case of SAT, like e.g. Horn or 2-CNF, i.e., membership in $C$ can be tested in polynomial time, and formulas $F\in C$ can be tested for satisfiability in polynomial time.
  • Unsatisfiable formulas $F\in C$ are not known to have short (polynomial size) tree-like resolution refutations. Even better would be: there are unsatisfiable formulas in $C$ for which a super-polynomial lower bound for tree-like resolution is known.
  • On the other hand, unsatisfiable formulas in $C$ are known to have short proofs in some stronger proof system, e.g. in dag-like resolution or some even stronger system.

$C$ should not be too sparse, i.e., contain many formulas with $n$ variables, for every (or at least for most values of) $n\in \mathbb{N}$. It should also be non-trivial, in the sense of containing satisfiable as well as unsatisfiable formulas.

The following approach to solving an arbitrary CNF formula $F$ should be meaningful: find a partial assignment $\alpha$ s.t. the residual formula $F\alpha$ is in $C$, and then apply the polynomial time algorithm for formulas in $C$ to $F\alpha$. Therefore I would like other answers besides the all-different constraints from the currently accepted answer, as I think it is rare that an arbitrary formula will become an all-different constraint after applying a restriction.

by Jan Johannsen at September 16, 2014 04:49 PM

CompsciOverflow

How to calculate or estimate how long it takes to solve a recurrence relation?

I am trying to understand how to solve complex recurrence relations and whether there is a general method or technique to help me. That being said I am not talking about recurrence relations that can be solved using the (generalized) Master Theorem or the Akra–Bazzi method.

This is by no means an "I have some homework to solve, please solve it for me" question. It is more at a conceptual level. I want to be able to solve and "see" through these relations with ease.

How do you efficiently calculate the time needed to solve a recurrence relation?

Is it something that, simply put, takes experience and personal intuition? Is there some canonical method that I am not aware of? Moreover, how do you go on to calculate the same relations for cases where you have memoization?

by KXK at September 16, 2014 04:45 PM

On average, how many source files does a large proprietary software project have? (if possible with a reference) [on hold]

I was checking the number of files in the Linux kernel open source project: around 56K. Linux has more than 15 years of history and currently more than 3K developers. Most of the time, this is not the reality of proprietary software projects. Do you have any idea how many source files a large proprietary software project has, on average? (If possible, would you know a reference in the literature discussing this topic?)

My goal is to have an average in the market. Something more tangible than just saying: "Yes, logically comparing Linux with proprietary software is not fair". Even if it is logical, I would like to have a reference to support it.

by Samuel Donadelli at September 16, 2014 04:20 PM

Lobsters

StackOverflow

Implementing client-server instant messaging using the zmq library. I have a basic approach of handling each client in a separate thread

Implementing client-server instant messaging using the zmq library. I have a basic approach of handling each client in a separate thread. Should I use DEALER and ROUTER sockets, or the REQ-REP pattern? How would I identify client IDs? How would I handle each client in a separate thread, with communication between different clients?

by monsterrrrr at September 16, 2014 04:10 PM

Lobsters

TheoryOverflow

Entropy of sum of dependent random variables

Can you help me find the entropy of the sum of two dependent random variables i.e find

$h(X+Y)$ when X and Y are depenendent.

by user12345 at September 16, 2014 04:06 PM

/r/netsec

StackOverflow

Dead letters in akka remoting (scala)

I am encountering dead letters when I try to run a simple example using remote akka actors on my localhost.

This is my build.sbt file for the remote project.

name := "HelloRemote"

version := "1.0"

scalaVersion := "2.11.2"

resolvers += "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.6",
  "com.typesafe.akka" %% "akka-remote" % "2.3.6"
)

This is my application.conf file for remote system.

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
   }
   remote {
     enabled-transports = ["akka.remote.netty.tcp"]
     netty.tcp {
       hostname = "127.0.0.1"
       port = 5100
     }
   }
}

This is my HelloRemote.scala file for the remote system.

package remote

import akka.actor._

object HelloRemote extends App  {
  val system = ActorSystem("HelloRemoteSystem")
  val remoteActor = system.actorOf(Props[RemoteActor], name = "RemoteActor")
  remoteActor ! "The RemoteActor is alive"
}

class RemoteActor extends Actor {
  def receive = {
    case msg: String =>
        println(s"RemoteActor received message '$msg'")
        sender ! "Hello from the RemoteActor"
  }
}

For my local system, the build.sbt file is as follows.

name := "HelloLocal"

version := "1.0"

scalaVersion := "2.11.2"

resolvers += "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.6",
  "com.typesafe.akka" %% "akka-remote" % "2.3.6"
)

The application.conf file for my local system is

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
}

And the HelloLocal.scala file for my local system is

package local

import akka.actor._

object Local extends App {

  val system = ActorSystem("LocalSystem")
  val localActor = system.actorOf(Props[LocalActor], name = "LocalActor")  // the local actor
  localActor ! "START"                                                     // start the action

}

class LocalActor extends Actor {

  // create the remote actor
  val remote = context.actorSelection("akka.tcp://HelloRemoteSystem@127.0.0.1:5100/user/RemoteActor")
  var counter = 0

  def receive = {
    case "START" =>
        remote ! "Hello from the LocalActor"
    case msg: String =>
        println(s"LocalActor received message: '$msg'")
        if (counter < 5) {
            sender ! "Hello back to you"
            counter += 1
        }
  }
}

When I first run HelloRemote.scala, "The RemoteActor is alive" gets printed as expected, and then I immediately get the error

[INFO] [09/16/2014 10:52:47.585] [HelloRemoteSystem-akka.actor.default-dispatcher-4] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#1051175275] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

I get a similar error when I run the local system HelloLocal.scala and then nothing happens. Am I doing something wrong here?
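
One thing worth checking (my observation, not part of the original question): both apps send their first message from outside any actor (remoteActor ! "The RemoteActor is alive" and localActor ! "START"), so those messages have no sender. When RemoteActor then replies with sender ! ..., the reply is routed to deadLetters, which is exactly what the log reports. A hedged sketch of a guard for that case:

class RemoteActor extends Actor {
  def receive = {
    case msg: String =>
      println(s"RemoteActor received message '$msg'")
      // Replies to messages sent from outside an actor go to deadLetters;
      // only answer when there is a real sender.
      if (sender() != context.system.deadLetters)
        sender() ! "Hello from the RemoteActor"
  }
}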

by user2893084 at September 16, 2014 03:57 PM

High Scalability

Sponsored Post: Apple, Flipboard, All Your Base, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?

  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Siri Operations Developer. Apple is looking for talented developers to help build the next generation internal cloud platform for Siri. This person should be excited about solving difficult distributed systems problems as well as constantly improving user-experience. This person will be working with a highly technical and motivated team solving the hard problems. Please apply here.
    • Site Reliability Engineer. The Apple Pay Site Reliability Team is hiring for multiple roles focused on the front line customer experience and the back end integration of Apple systems with our Network and Banking partners. Please apply here.
    • Senior Software Engineer, iTunes Infrastructure. Hands-on senior software engineering for the iTunes digital media supply chain engineering team. We are looking for a self starting, energetic individual who is not afraid to question assumptions and with excellent written and oral communication skills. Please apply here
    • iTunes - Content Management Tools Engineer. The candidate should have several years experience developing large-scale web-based applications using object-oriented languages. Excellent understanding of relational databases and data-modeling techniques is also a must. Please apply here

  • Flipboard's Site Reliability Engineering Team is hiring! This team offers great challenges solving unique problems unlike any you have seen!  They work exclusively in the cloud, ensuring a highly available and performant product to millions of users daily.  If you have a passion for large-scale systems, next generation provisioning and orchestration tools apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.

Fun and Informative Events

  • All Your Base is the only curated database conference of its kind in the UK. Listen to talks from database creators, industry leaders and developers working at the coal face on where to store and how to handle your data. Book tickets.

Cool Products and Services

  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key Value store will have free access to SQL Layer. SQL Layer is also open source, you can get started with it on GitHub as well.

  • Better, Faster, Cheaper: Pick Three. Scalyr is your universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs”; our columnar data store enables enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – get on board!

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions. 

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

by Todd Hoff at September 16, 2014 03:56 PM

/r/emacs

Lobsters

StackOverflow

How to understand the sentence "or that it evaluates to bottom"?

I see these sentences on the book "functional programming in scala":

If the evaluation of an expression runs forever or throws an error instead of returning a definite value, we say that the expression does not terminate, or that it evaluates to bottom. A function f is strict if the expression f(x) evaluates to bottom for all x that evaluate to bottom.

Sorry for my poor English, I found myself can't understand this sentence well:

or that it evaluates to bottom

Two parts I can't understand:

  1. Is "it evaluates to bottom" the same to "the expression does not terminate", or opposite?
  2. What does "bottom" mean here?

Thanks

by Freewind at September 16, 2014 03:39 PM

Log in with Friend, from the server side?

I'm adding Friend to my Compojure app for authentication. I'm using workflows/interactive-form. I have a form that lets me register a new user (JS POST to /register), and I have a form that lets me log in (JS POST to /login), and they both work great so far.

I want new user registration to also log in the user, naturally. Is there an easy way to say "this user is now logged-in" on the server side?

(The client side AJAX call to /register has the username and password, so I could simply have it re-encode the form data in the way that /login expects, and submit a second AJAX request for the log in. That seems awkward, though.)

I think that this might have something to do with :workflows but that part of the documentation is a bit puzzling to me.

by user3642070 at September 16, 2014 03:39 PM

Twitter

Hello Pants build

As codebases grow, they become increasingly difficult to work with. Builds get ever slower and existing tooling doesn’t scale. One solution is to keep splitting the code into more and more independent repositories — but you end up with hundreds of free-floating codebases with hard-to-manage dependencies. This makes it hard to discover, navigate and share code, which can affect developer productivity.

Another solution is to have a single large, unified codebase. We’ve found that this promotes better engineering team cohesion and collaboration, which results in greater productivity and happiness. But tooling for such structured codebases has been lacking. That’s why we developed Pants, an open source build system written in Python.

Pants models code modules (known as “targets”) and their dependencies in BUILD files — in a manner similar to Google’s internal build system. This allows it to only build the parts of the codebase you actually need, ignoring the rest of the code. That’s a key requirement for scaling large, unified codebases.

Pants started out in 2010 as an internal tool here, and was originally just a frontend to generate build.xml files for the Ant build tool, hence the name (a contraction of “Python Ant”). Pants grew in capability and complexity, and became the build tool for the twitter/commons open source libraries, and hence became open source itself.

In 2012, Foursquare began using Pants internally, and Foursquare engineers picked up the Pants development mantle, adding Scala support, build artifact caching and many other features. Since then, several more engineering teams, including those at Urban Compass and Oscar, have integrated Pants into their codebases. Most recently, Square began to use Pants, and has contributed significantly to its development.

As a result, Pants is a true independent open source project with collaborators across companies and a growing development community. It now lives in a standalone GitHub repo at github.com/pantsbuild/pants and we’ve welcomed more committers to the project. 

Among Pants’ current strengths:

  • Builds Java, Scala and Python.
  • Adding support for new languages is straightforward.
  • Supports code generation: thrift, protocol buffers, custom code generators.
  • Resolves external JVM and Python dependencies.
  • Runs tests.
  • Spawns Python and Scala REPLs with appropriate load paths.
  • Creates deployable packages.
  • Scales to large repos with many interdependent modules.
  • Designed for incremental builds.
  • Support for local and distributed caching.
  • Especially fast for Scala builds, compared to alternatives.
  • Builds standalone python executables (PEX files).
  • Has a plugin system to add custom features and override stock behavior.
  • Runs on Linux and Mac OS X.

If your codebase is growing beyond your toolchain’s ability to scale, but you’re reluctant to split it up, you might want to give Pants a try. It may be of particular interest if you have complex dependencies, generated code and custom build steps. Pants is still a young and evolving open source project. We constantly strive to make it easier to use. If you’re interested in using or learning from Pants, reach out to the community on the developer mailing list and follow @pantsbuild for updates.

September 16, 2014 03:36 PM

QuantOverflow

Why is the duration of a bond important?

I know what it measures, but now in the age of computers why is it useful? If the yield changes, we could just simply plug the new yield into a program, or excel or something like that, and calculate the new price of the bond.

by kanbhold at September 16, 2014 03:33 PM

Planet Theory

TR14-120 | Proof Complexity of Resolution-based QBF Calculi | Mikolas Janota, Olaf Beyersdorff, Leroy Chew

Proof systems for quantified Boolean formulas (QBFs) provide a theoretical underpinning for the performance of important QBF solvers. However, the proof complexity of these proof systems is currently not well understood and in particular lower bound techniques are missing. In this paper we exhibit a new and elegant proof technique for showing lower bounds in QBF proof systems based on strategy extraction. This technique provides a direct transfer of circuit lower bounds to lengths of proofs lower bounds. We use our method to show the hardness of a natural class of parity formulas for Q-resolution. Variants of the formulas are hard for even stronger systems such as long-distance and universal Q-resolution. With a completely different lower bound argument we show the hardness of the prominent formulas of Kleine Büning et al. for the strong expansion-based calculus IR-calc, thus also confirming the hardness of the formulas for Q-resolution. Our lower bounds imply new exponential separations between two different types of resolution-based QBF calculi: proof systems for DPLL-based solvers (Q-resolution, long-distance Q-resolution) and proof systems for expansion-based solvers ($\forall$Exp+Res and its generalizations IR-calc and IRM-calc). The relations between proof systems from the two different classes were not known before.

September 16, 2014 03:31 PM

StackOverflow

Can't use WebJarAssets' "at" method (Play 2.3.4)

I can't figure out why I can't use the WebJarAssets "at" method. My configuration is as follows:

In my build.sbt:

...
libraryDependencies ++= Seq(
  "org.webjars" %% "webjars-play" % "2.3.0",
  "org.webjars" % "jquery" % "1.11.1",
)
...

In my routes:

GET        /webjars/*file       controllers.WebJarAssets.at(file)
GET        /assets/*file        controllers.Assets.at(path="/public", file)

Then in my main.scala.html I tried to add jQuery like this:

<script src="@routes.WebJarAssets.at(WebJarAssets.locate("jquery.min.js"))" type="text/javascript"></script>

But at this point the "at" method can't be accessed and I don't know why. I actually use Play 2.3.4 and I just created this project, so there shouldn't be any conflicting libs.

Is there somebody able to help me with this?

thanks in advance

by muenchnair at September 16, 2014 03:29 PM

Compilation failed: error while loading AnnotatedElement, ConcurrentMap, CharSequence from Java 8 under Scala 2.10?

I'm using the following:

  • Scala 2.10.4
  • Scalatra 2.2.2
  • sbt 0.13.0
  • java 1.8.0
  • casbah 2.7.2
  • scalatra-sbt 0.3.5

I'm frequently running into this error:

21:32:00.836 [qtp1687101938-55] ERROR o.fusesource.scalate.TemplateEngine - Compilation failed:
error: error while loading CharSequence, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)' is broken
(class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
error: error while loading ConcurrentMap, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/concurrent/ConcurrentMap.class)' is broken
(class java.lang.RuntimeException/bad constant pool tag 18 at byte 61)
two errors found
21:38:03.616 [qtp1687101938-56] ERROR o.fusesource.scalate.TemplateEngine - Compilation failed:
error: error while loading AnnotatedElement, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)' is broken
(class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
one error found

Currently I'm running into this when simply trying to call a .count() on my MongoDB collection.

Upon Googling, it seems like it may be caused by dependency issues. The thing is, I'm using Scalatra just to serve an API and actually don't require any of the scalate stuff. I commented out all references to it, but I still get this. Could it be a dependency issue between the libraries I'm using?

Any help appreciated. Thanks!

by jnfr at September 16, 2014 03:12 PM

/r/emacs

Editing SQL in PHP files

I write a lot of SQL code in my PHP files. How can I get the amazing "auto"-indentation of Emacs to work with this?

When indenting the buffer the code should look like this:

<?php
$query = "
    SELECT products.id, products.name
    FROM products
    LEFT JOIN persons ON persons.id = products.customerId
    WHERE products.id > 12"
?>

In web-mode and php-mode it looks like this:

$query = "
SELECT products.id, products.name
FROM products
LEFT JOIN persons ON persons.id = products.customerId
WHERE products.id > 12"
?>

If this isn't possible, can I somehow enable manual indentation on rows that include SQL code?

submitted by oskwish
[link] [4 comments]

September 16, 2014 03:12 PM

StackOverflow

Can't unpickle a joda DateTime

I'm trying to pickle/unpickle a joda DateTime instance to/from json with pickling. With pickling 0.8.0, if I don't supply a custom pickler, I get

JSONPickle({
  "tpe": "org.joda.time.DateTime"
})

When I do:

class DateTimePickler(implicit val format: PickleFormat) extends
  SPickler[DateTime] with Unpickler[DateTime] {
    private val stringUnpickler = implicitly[Unpickler[String]]

    def pickle(picklee: DateTime, builder: PBuilder): Unit = {
      builder.beginEntry(picklee)
      builder.putField("date", b =>
        b.hintTag(stringTag).beginEntry(picklee.toString).endEntry()
      )
      builder.endEntry()
    }

    override def unpickle(tag: => FastTypeTag[_], reader: PReader): DateTime = {
      reader.hintTag(stringTag)
      val tag = reader.beginEntry()
      logger.debug(s"tag is ${tag.toString}")
      val value = stringUnpickler.unpickle(tag, reader).asInstanceOf[String] //can't debug NoSuchElementException: : key not found: value
      logger.debug(s"value is $value")
      reader.endEntry()

      val timeZone = DateTimeZone.getDefault
      DateTime.parse(value).withZone(timeZone) //otherwise the parsed DateTime won't equal the original
    }
  }

  implicit def genDateTimePickler(implicit format: PickleFormat) = new DateTimePickler

I get

JSONPickle({
  "tpe": "org.joda.time.DateTime",
  "date": {
    "tpe": "java.lang.String",
    "value": "2014-09-16T17:59:25.865+03:00"
  }
})

and unpickling fails with NoSuchElementException: : key not found: value. With pickling 0.9.0-SNAPSHOT my specs2 test won't even terminate.

by Yar at September 16, 2014 03:11 PM

Pre-process parameters of a case class constructor without repeating the argument list

I have this case class with a lot of parameters:

case class Document(id:String, title:String, ...12 more params.. , keywords: Seq[String]) 

For certain parameters, I need to do some string cleanup (trim, etc) before creating the object.

I know I could add a companion object with an apply function, but the LAST thing I want is to write the list of parameters TWICE in my code (case class constructor and companion object's apply).

Does Scala provide anything to help me on this?
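
One pattern that avoids repeating the parameter list (a suggestion of mine, not a language feature): keep the generated constructor and add a normalizing method that uses copy with named arguments for just the fields that need cleanup. Sketched with a shortened parameter list:

case class Document(id: String, title: String, keywords: Seq[String]) {
  // Touch only the fields that need cleanup; copy fills in the rest.
  def normalized: Document =
    copy(id = id.trim, title = title.trim, keywords = keywords.map(_.trim))
}

Callers then write Document("  42 ", " Title ", Seq(" a ")).normalized, and the parameter list is still written only once.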

by sscarduzio at September 16, 2014 03:10 PM

QuantOverflow

How to select optimal betting strategy from backtest?

I have written a model for predicting the winner of UFC fights.

My model calculates the probability of each fighter to win a given match.

I have backtested the model and found it to be very accurate: it predicts the winner around 65% of the time. The model is trained on 3 years of data and then tested out of sample on the past 9 months' worth of data.

I am trying to use my model's output (out of sample) and historic bookmaker odds to come up with an optimal betting strategy. This is based on the Kelly criterion.

I have 5 parameters that I think will affect the profitability of a betting strategy. They are based around Kelly fractions and handling uncertainty.

What I did is write a program to create ~500k strategies with different weights for each parameter. I then run them through the past 9 months of data to determine their profitability.

From those ~500k I can narrow down the strategies I'm interested in by filtering them by maximum drawdown (20%) and minimum ROI (25%); this brings the number of strategies down to around 20k.

How can I further narrow down the strategies into an optimal one?

If I take the best strategy (most profit over the 9 months of out-of-sample data) I worry that it is likely badly overfit, and if I take the average of each parameter across the 20k strategies I worry that this set of parameters may not work well together.

How can I narrow down the ~20k strategies into one that works well and is likely not to be overfit?

Thanks for your help.

by Watson at September 16, 2014 03:09 PM

StackOverflow

Parse JSON sequence without keys in Play Framework

I am trying to parse a JSON document that consists only of a top-level array without a key.

import play.api.libs.json._
import play.api.libs.functional.syntax._

case class Name(first: String, last: String)
case class Names(names: Seq[Name])

implicit val NameF = Json.format[Name]

val s = """[{"first": "A", "last": "B"},{"first": "C", "last": "D"},{"first": "E", "last": "F"}]"""

implicit val NF: Reads[Names] = (
    JsPath.read[Seq[Name]]
)(Names.apply _)

<console>:34: error: overloaded method value read with alternatives:
  (t: Seq[Name])play.api.libs.json.Reads[Seq[Name]] <and>
  (implicit r: play.api.libs.json.Reads[Seq[Name]])play.api.libs.json.Reads[Seq[Name]]
 cannot be applied to (Seq[Name] => Names)
            JsPath.read[Seq[Name]]
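
The functional-syntax builder needs at least two branches, which is what the overload error is complaining about. For a single value, the usual workaround (a sketch, assuming the definitions above) is to read the Seq directly and map the constructor over the result:

implicit val NF: Reads[Names] = JsPath.read[Seq[Name]].map(Names.apply)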

by Ido Tamir at September 16, 2014 03:08 PM

Is there any working example of an sbt plugin defined with AutoPlugin?

I see there is an AutoPlugin feature in sbt version 0.13.5, and I want to define a simple sbt plugin with it.

But sadly, I followed the documentation (which is not detailed) and also this question (which is not actually resolved), without any luck.

Is there any working example I can try?

by Freewind at September 16, 2014 03:00 PM

Thread-safe in-memory cache

I need a simple thread-safe in-memory cache in my Scala application.

I need a data structure which support this operation from scala.collection.mutable.MapLike:

def getOrElseUpdate(key: A, op: => B)

and I want this operation to be atomic. Is this operation atomic in scala.collection.concurrent.TrieMap? Or should I use some other data structure?

I use Scala 2.10, but will probably upgrade to 2.11 soon.

I don't want to use scala.collection.mutable.SynchronizedMap, since it is deprecated in Scala 2.11.
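
For what it's worth: in Scala 2.10, TrieMap inherits getOrElseUpdate from MapLike, so it is not atomic there (an atomic override only arrived in later 2.11 point releases, if I remember right). A sketch of an atomic variant built on TrieMap's putIfAbsent, which is atomic; note that op may be evaluated more than once under contention, but only one result is ever stored and seen by callers:

import scala.collection.concurrent.TrieMap

final class Cache[K, V] {
  private val underlying = TrieMap.empty[K, V]

  // putIfAbsent is atomic: the first value stored for a key wins, and
  // losing callers get the winning value back.
  def getOrElseUpdate(key: K, op: => V): V =
    underlying.get(key) match {
      case Some(v) => v
      case None =>
        val v = op
        underlying.putIfAbsent(key, v).getOrElse(v)
    }
}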

by Mikael Ståldal at September 16, 2014 02:46 PM

Dave Winer

CompsciOverflow

Emulations of atomic registers and read-modify-write (RMW) primitives in message-passing systems

The well-known ABD algorithm in Sharing Memory Robustly in Message-Passing Systems can emulate atomic, single-writer multi-reader registers in message-passing systems, in the presence of processor or link failures. Looking into the details of this algorithm (in Figure 2, page 133), I found that it implicitly assumes "conditional write" primitives at the side of servers:

case received from w
    <W, label_w>: label_j = max{label_w, label_j}; 
                  send <ACK-W> to w;

Here, the statement label_j = max{label_w, label_j} is equivalent to if (label_j < label_w) then label_j = label_w, requiring the variable label_j maintained by each process $j$ to be monotonic. This in turn needs the if-then "conditional write" primitive.

My first question is:

(1) Is this "conditional write" primitive necessary? Do you know any literature on emulations of atomic, SWMR registers without such primitives?

In the last section of the same paper, titled "Discussion and Further Research", the authors mentioned the emulations of stronger shared memory primitives in message-passing systems in presence of failures. One typical example is read-modify-write (RMW).

My second question is:

(2) Do you know any literature on the emulations of RMW-like primitives?

Google search and a quick glance over the "cited by" papers do not bring me anything specific.

by hengxin at September 16, 2014 02:33 PM

/r/clojure

/r/compsci

StackOverflow

ZeroMQ sending many to one

I have implemented a zmq library using push / pull on windows. There is a server and up to 64 clients running over loopback. Each client can send and receive to the server. There is a thread that waits for each client to connect on a pull zmq socket. Clients can go away at any time.

The server is expected to go down at times and when it comes back up the clients need to reconnect to it.

The problem is that when nothing is connected I have 64 receive threads waiting for a connection. This shows up as a lot of connections in tcpview, and my colleagues inform me that it looks like a performance/DDoS sort of attack.

So in order to get around that issue I'd like the clients to send some sort of heartbeat to the server ("hey, I'm here") on one socket. However I can't see how to do that with zmq.

Any help would be appreciated.

by user1335325 at September 16, 2014 02:03 PM

Cleaner way to update nested structures

Say I have got following two case classes:

case class Address(street: String, city: String, state: String, zipCode: Int)
case class Person(firstName: String, lastName: String, address: Address)

and the following instance of Person class:

val raj = Person("Raj", "Shekhar", Address("M Gandhi Marg", 
                                           "Mumbai", 
                                           "Maharashtra", 
                                           411342))

Now if I want to update zipCode of raj then I will have to do:

val updatedRaj = raj.copy(address = raj.address.copy(zipCode = raj.address.zipCode + 1))

With more levels of nesting this gets even uglier. Is there a cleaner way (something like Clojure's update-in) to update such nested structures?
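
At the time of writing, lens libraries (Scalaz's Lens, Shapeless, Monocle) exist for exactly this. As a self-contained sketch of the idea, a hand-rolled lens composes getters and copy-based setters so the update site stays flat:

case class Lens[A, B](get: A => B, set: (A, B) => A) {
  def modify(a: A)(f: B => B): A = set(a, f(get(a)))
  // Compose with a lens that looks deeper into B.
  def andThen[C](inner: Lens[B, C]): Lens[A, C] =
    Lens(a => inner.get(get(a)), (a, c) => set(a, inner.set(get(a), c)))
}

val addressL = Lens[Person, Address](_.address, (p, a) => p.copy(address = a))
val zipL     = Lens[Address, Int](_.zipCode, (a, z) => a.copy(zipCode = z))

val updatedRaj = (addressL andThen zipL).modify(raj)(_ + 1)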

by missingfaktor at September 16, 2014 02:02 PM

Halfbakery

QuantOverflow

Does anyone have a C# implementation of the Barone Adesi Whaley options pricing model?

Thanks. Can't seem to find it through google. Worst case, if you can provide me the code in Java or C++ I can convert it to C#.

by user3914487 at September 16, 2014 01:55 PM

Lobsters

CompsciOverflow

minimum vertex set removal for edge-free graph

I'd like to know the name and the algorithm for the following problem which I'm guessing is a classic, but is slightly different from graph connectivity.

Consider an undirected graph G=(V,E). What is the minimum number of vertices whose removal (the vertices and their adjacent edges) leaves the resulting subgraph free of any edges?

For example: if A - B - C, then we just need to remove B. If A - B - C and A - C, then we need to remove two vertices (any pair).

For an algorithm, intuitively I'd proceed by first removing the vertex of highest degree and repeating on the remaining graph until there are no edges. Not sure if it gives the minimum number. For sure, in the worst case I can always go through all possible |V|! removal orders and take the min.
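
For reference (my note, not part of the original question): this is the minimum vertex cover problem, which is NP-hard, so the highest-degree greedy heuristic is not guaranteed to find the minimum. A well-known compromise is the matching-based 2-approximation: take both endpoints of every edge of a greedily built maximal matching. A sketch:

// 2-approximation for minimum vertex cover. The chosen edges form a
// maximal matching, and any cover must hit each of them at least once,
// so the result is at most twice the size of an optimal cover.
def approxVertexCover[V](edges: Seq[(V, V)]): Set[V] =
  edges.foldLeft(Set.empty[V]) { case (cover, (u, v)) =>
    if (cover(u) || cover(v)) cover else cover + u + v
  }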

by jam123 at September 16, 2014 01:47 PM

StackOverflow

Printing a list or vector of strings

If I have a list of strings:

("String 1" "String 2" "String 3")

or a vector of strings:

["String 1" "String 2" "String 3"]

threading either through (map println) produces this (for lists):

(String 1 String 2 nil String 3 nil nil)

and this for vectors:

(String 1 String 2 String 3 nil nil nil)

What's going on here? Where are the nils coming from? How do I get this?:

String 1 String 2 String 3

(No nils, no parentheses, just newlines.)

by mwfogleman at September 16, 2014 01:46 PM

Regex JSON response Gatling stress tool

Wanting to capture a variable called scanNumber in the HTTP response looking like this:

{"resultCode":"SUCCESS","errorCode":null,"errorMessage":null,"profile":{"fullName":"TestFirstName TestMiddleName TestLastName","memberships":[{"name":"UA Gold Partner","number":"123-456-123-123","scanNumber":"123-456-123-123"}]}}

How can I do this with a regular expression? The tool I am using is the Gatling stress tool (with the Scala DSL).

I have tried to do it like this:

.check(jsonPath("""${scanNumber}""").saveAs("scanNr")))

But I get the error:

---- Errors --------------------------------------------------------------------
> Check extractor resolution crashed: No attribute named 'scanNumber' is defined      5 (100,0%)
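
The jsonPath check expects a JsonPath expression, whereas ${scanNumber} is Gatling EL and refers to a session attribute, which is why the error says no such attribute is defined. A sketch of what I believe the intended check looks like (no regular expression needed):

.check(jsonPath("$.profile.memberships[0].scanNumber").saveAs("scanNr"))

or, to match a scanNumber anywhere in the document, jsonPath("$..scanNumber").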

by Magnus Jensen at September 16, 2014 01:43 PM

What does the ## method do? [duplicate]

This question already has an answer here:

Simple question: What does the ## method do?

I could not get much information about this method. All I could find is that it has something to do with the hashCode method.

Can someone please explain or give a documentation link?
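
In short, ## is the null-safe hash that Scala keeps consistent with == across the boxed numeric types, where plain hashCode is not. A quick illustration (my own):

1 == 1.0                        // true
(1).hashCode == (1.0).hashCode  // false: Int and Double hash differently
(1).## == (1.0).##              // true: ## agrees with ==

val s: String = null
s.##                            // 0, where s.hashCode would throw a NullPointerException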

by John Threepwood at September 16, 2014 01:40 PM

Lobsters

StackOverflow

Scala Parboiled 2 currying up some rules

I'd like to create some helper rules that take one rule and add some features to it. For example enforcing that string literals need to be quoted, or adding token position tracking to the token rules / ADT's.

I tried the following syntax (and quite a few permutations).

  def quoted[T](rl: Rule1[T]) = rule {
    '"' ~ rl ~ '"'
  }

It compiles fine, but as soon as I wire it up, e.g.:

  def NodeObjPathEntry: Rule1[CNodeObjPathEntry] = rule {
    WhiteSpace ~ quoted(IdentifierStringUnwrapped) ~ ':' ~ (NodeObjArray | NodeObjObj) ~> CNodeObjPathEntry
  }

With the sub-rules :

def IdentifierStringUnwrapped: Rule1[String] = rule {
    clearSB() ~ IdentifierChars ~ push(sb.toString)   
}

 def IdentifierChars = rule {
    Alpha ~ appendSB() ~ zeroOrMore(AlphaNum ~ appendSB())
  }

I get Illegal rule call: quoted[this.String](this.IdentifierStringUnwrapped)

I could commit to an alternative approach: mix in the primitive token parsers, and then create the variants I need. But I really wanna figure out what is going on.

by Hassan Syed at September 16, 2014 01:30 PM

exhaustive patterns

I'm learning ML. Can somebody please explain what "exhaustive patterns" means?

by rookie at September 16, 2014 01:27 PM

/r/compilers

TheoryOverflow

Categories a computer scientist should know about

I am a computer scientist with a CS degree from the 80's. I am learning category theory by myself. I am looking for advice about learning category theory. E.g. it is quite helpful in learning category theory to have examples from a familiar topic like logic or programming languages.

What I know

I have read the answers to this question and the paper Physics, Topology, Logic and Computation: A Rosetta Stone which builds up concepts with parallels between category theory, logic, and theory of computation.

I know about the following categories:

• categories
• monoidal categories
• braided monoidal categories
• closed monoidal categories
• symmetric monoidal categories
• closed braided monoidal categories
• compact monoidal categories
• cartesian categories
• closed symmetric monoidal categories
• compact braided monoidal categories
• cartesian closed categories
• compact symmetric monoidal categories

Questions

  1. Are there any other major categories that a computer scientist should know about?

  2. Were there any that have fallen out of favor? If yes, why? Have they been replaced with other categories?

  3. When thinking about categories should I focus on the arrows first before the objects?

by Guy Coder at September 16, 2014 01:20 PM

Lobsters

StackOverflow

Check Current Version of Scala inside DOS Command Prompt

Is there any way to find the current version of Scala that I installed, from the command prompt?

As you know, in the command prompt java -version gives us the current version of Java on the system, and I am wondering if there is any command for Scala that gives me its current version.

I followed these instructions to set up Scala on Windows:

  1. Download the sbt installer from here: http://scalasbt.artifactoryonline.com/scalasbt/sbt-native-packages/org/scala-sbt/sbt/0.12.4/sbt.msi
  2. Run the installer

Verify that sbt is installed correctly: open the Command Prompt and type sbt sbt-version; you should see the version number of sbt (the first time you run it, sbt will download libraries from the internet). If you have problems installing sbt, ask for help on the forums.

And the tutorial says it downloads dependencies when sbt sbt-version is run, or maybe I got that wrong?
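
For the direct question: the Scala runner itself reports its version, analogous to java -version (assuming scala is on your PATH; the sbt installer alone does not necessarily put it there, and the output below is indicative):

> scala -version
Scala code runner version 2.11.2 -- Copyright 2002-2013, LAMP/EPFL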

by Kick Buttowski at September 16, 2014 01:09 PM

CompsciOverflow

Solving or approximating recurrence relations for sequences of numbers

In computer science, we have often have to solve recurrence relations, that is find a closed form for a recursively defined sequence of numbers. When considering runtimes, we are often interested mainly in the sequence's asymptotic growth.

Examples are

  1. The runtime of a tail-recursive function stepping downwards to $0$ from $n$ whose body takes time $f(n)$:

    $\qquad \begin{align} T(0) &= 0 \\ T(n+1) &= T(n) + f(n) \end{align}$

  2. The Fibonacci sequence:

    $\qquad \begin{align} F_0 &= 0 \\ F_1 &= 1 \\ F_{n+2} &= F_n + F_{n+1} \end{align}$

  3. The number of Dyck words with $n$ parenthesis pairs:

    $\qquad\begin{align} C_0 &= 1 \\ C_{n+1}&=\sum_{i=0}^{n}C_i\,C_{n-i} \end{align}$

  4. The mergesort runtime recurrence on lists of length $n$:

    $\qquad \begin{align} T(1) &= T(0) = 0 \\ T(n) &= T(\lfloor n/2\rfloor) + T(\lceil n/2\rceil) + n-1 \end{align}$

What are methods to solve recurrence relations? We are looking for

  • general methods and
  • methods for a significant subclass

as well as

  • methods that yield precise solutions and
  • methods that provide (bounds on) asymptotic growth.

This is supposed to become a reference question. Please post one answer per method and provide a general description as well as an illustrative example.

by Raphael at September 16, 2014 01:05 PM

Portland Pattern Repository

StackOverflow

Functional solution for looping through nested dictionaries

I have a dictionary whose keys are strings and whose values are dictionaries. The depth is defined and constant: 3. When I need to loop through it, I do the following:

for k1, v1 in d1.iteritems():
  for k2, v2 in v1.iteritems():
    for k3, v3 in v2.iteritems():
      # Do something with k1, k2, k3 and v3

I would like to know if there is a cleaner functional solution (without defining it myself), so that I could do something like this:

for k1, k2, k3, v3 in superfunction(d1):
  # Do something...

Thanks in advance.

by ikaros45 at September 16, 2014 12:57 PM

Dave Winer

StackOverflow

AMQP connection

Running RabbitMQ rabbitmq-3.2.3_2 with pecl-amqp 1.3.0 and php55-5.5.9 on FreeBSD 9.2 amd64.

Everything seems to work fine, but while querying we get this error from AMQP:

[AMQPConnectionException] Library error: a socket error occurred - Potential login failure.

Tried almost everything: downgrading AMQP to 1.0.9, deleting and re-adding the user in RabbitMQ, changing permissions, but still nothing. Anyone got the same error? Or any solutions?

by Raniel-Atero at September 16, 2014 12:54 PM

OpenJDK crashing Eclipse

I'm working on migrating my project from the Oracle JDK to OpenJDK (Zulu 7).

The problem is that after I point my Eclipse to the OpenJDK version, Eclipse keeps crashing. Once I point Eclipse to the Oracle installation, it behaves properly.

Is there any way I could solve this issue and continue to point to the OpenJDK version?

Cheers!!!

by mani_nz at September 16, 2014 12:40 PM

Scala 2.11.x concurrency: pool of workers doing something similar to map-reduce?

What is the idiomatic way to implement a pool of workers in Scala, such that work units coming from some source can be allocated to the next free worker and processed asynchronously? Each worker would produce a result and eventually, all the results would need to get combined to produce the overall result. We do not know the number of work units on which we need to run a worker in advance and we do not know in advance the optimal number of workers, because that will depend on the system we run on. So roughly what should happen is this:

for each work unit, eventually start a worker to process it
for each finished worker, combine its result into the global result
return the global result after all the worker results have been combined

Should this be done exclusively by futures, no matter how many work units and how many workers there will be? What if the results can only be combined when they are ALL available? Most examples of futures I have seen have a fixed number of futures and then use a for-comprehension to combine them, but what if the number of futures is not known and I have e.g. just a collection of an arbitrary number of futures? What if there will be billions of easier work units to get processed that way versus just a few dozen long-running ones? Are there other, better ways to do this, e.g. with Actors instead?

How would the design ideally change when the results of each worker do not need to be combined and each worker is completely independent of the others?
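
As a sketch of the futures-only shape (WorkUnit, process, zero and combine are placeholder names of mine): the ExecutionContext's thread pool plays the role of the worker pool, so the number of workers adapts to the machine; Future.traverse handles an arbitrary, unknown number of units; and the fold runs only once ALL results are available.

import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global  // pool sized for the machine

def runAll[WorkUnit, Result](units: Seq[WorkUnit])
                            (process: WorkUnit => Result)
                            (zero: Result)(combine: (Result, Result) => Result): Future[Result] =
  Future.traverse(units)(u => Future(process(u)))   // one future per work unit
    .map(_.foldLeft(zero)(combine))                 // combine when all are done

For billions of tiny units you would batch them first so each future carries meaningful work; when the results are independent and need no combining, the fold simply disappears and each future can be fired on its own.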

by Johsm at September 16, 2014 12:38 PM

Lobsters