Planet Primates

May 24, 2015

StackOverflow

How to find the first five maximum indices of each column in a matrix?

This is my matrix, and I want to extract the indices of the first five maximum values in each column, in Spark and Scala using Breeze:

indices

  0         0.23 0.20 0.10 0.92 0.33 0.42
  1         0.10 0.43 0.23 0.15 0.22 0.12
  2         0.20 0.13 0.25 0.85 0.02 0.32
  3         0.43 0.65 0.23 0.45 0.10 0.33
  4         0.31 0.87 0.45 0.63 0.28 0.16
  5         0.12 0.84 0.33 0.45 0.56 0.83
  6         0.40 0.22 0.12 0.87 0.35 0.78
           ...

(Note: the indices are not part of the matrix; they are shown only to illustrate the problem.)

and expected output is :

3 4 4 0 5 5
6 5 5 6 6 6
4 3 2 2 0 0
0 1 1 4 4 3
2 6 3 3 1 2

I've tried:

  for (i <- 0 until I) {
    val T = argmax(matrix(::, i))
    results(::, i) := T
  }

but it returns only the first maximum index.

Can anybody help me?
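Since argmax returns only a single index, one possible approach is to sort each column's (value, index) pairs by value and keep the first five indices. A minimal Breeze-only sketch (no Spark distribution), assuming the matrix is a DenseMatrix[Double]:

import breeze.linalg.DenseMatrix

// for each column: pair every value with its row index, sort the pairs
// by descending value, and keep the first five row indices
def top5IndicesPerColumn(m: DenseMatrix[Double]): Array[Array[Int]] =
  (0 until m.cols).map { j =>
    m(::, j).toArray
      .zipWithIndex                   // (value, rowIndex)
      .sortBy { case (v, _) => -v }   // largest values first
      .take(5)
      .map { case (_, i) => i }
  }.toArray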

by Rozita at May 24, 2015 02:52 PM

CompsciOverflow

What is a "trap" in the context of system virtualization?

My professor's notes on system virtualization refer a lot to the term "trap", but don't seem to explicitly define it. I can't seem to find any definition on Google either, other than a small article on "trap-and-emulate". From context I sort of understand that trapping is 'catching' any operations that the virtual OS performs which it no longer has the privilege for (as it is now operating in restricted mode), and forwarding them to the hypervisor to be handled.

However, could someone who has a more solid understanding of this give me a better definition?

(also I apologize for poor tagging, as there doesn't seem to be a relevant tag for virtual systems)

by Sammdahamm at May 24, 2015 02:50 PM

StackOverflow

sbt native packager doesn't create scripts under target/universal/stage/bin

I'm trying to use JavaAppPackaging from sbt-native-packager. My understanding is that when I run:

sbt stage

I should get a directory target/universal/stage/bin with some startup scripts. Now I only get lib, which contains my jar and its dependencies.

Here are the relevant parts of my build.sbt:

val scalatra = "org.scalatra" %% "scalatra" % "2.3.1"

enablePlugins(JavaAppPackaging)

lazy val root = (project in file(".")).
  settings(
    name := "myapp",
    version := "0.2",
    scalaVersion := "2.11.6",
    libraryDependencies += scalatra
  )

Also, my plugins.sbt has this:

addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.0.0")

I'm using sbt 0.13.8.

So why don't I get the startup scripts, what am I missing?
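One thing worth checking (the usual cause of exactly this symptom, though a guess without seeing the whole build): enablePlugins(JavaAppPackaging) on its own line attaches to the implicitly defined root project, while lazy val root below defines a separate project that never gets the plugin. A sketch attaching the plugin to the project that is actually staged:

lazy val root = (project in file(".")).
  enablePlugins(JavaAppPackaging).
  settings(
    name := "myapp",
    version := "0.2",
    scalaVersion := "2.11.6",
    libraryDependencies += scalatra
  )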

by auramo at May 24, 2015 02:47 PM

TheoryOverflow

Split find-min data structure that finds several small elements?

The split find-min data structure is initialized with a sequence of elements $e_1,\ldots,e_n$, each associated with a key. The data structure supports three operations:

(1) $Split(e_i)$ that splits the sequence at position $e_i$.

(2) $FindMin(e_i)$ that returns the minimum in the interval that contains $e_i$.

(3) $DecreaseKey(e_i,k)$ that decreases the key of $e_i$ by $k$.

For instance, you may have started with $[5,2,1,4]$, where the min is $1$, then split at $2$ to get $[5][2,1,4]$, so the min of the first interval is $5$ and the min of the second interval is $1$.

Seth Pettie gave an implementation of this data structure that makes at most $O(m\log\alpha(m,n))$ comparisons, where $m$ is the number of $DecreaseKey$ operations and $n$ is the number of elements ($\alpha$ is the inverse Ackermann function). For more details see the paper:

Sensitivity Analysis of Minimum Spanning Trees in Sub-Inverse-Ackermann Time http://arxiv.org/abs/1407.1910

My question is: Suppose that you want to support queries not about which is the min element in each interval, but about which are the $l$ smallest elements in each interval, for a parameter $l\gg 1$ (note that you don't need to know the order among the $l$ smallest elements, only what they are). How many comparisons do you need for that?

by Dana Moshkovitz at May 24, 2015 02:46 PM

QuantOverflow

Black Scholes Model and Dividends

My question can be summarised as such:

  • Consider a portfolio. Say it has a price $\Pi = x$.
  • The portfolio consists of a stock and a sequence of call options on the stock.
  • It has been announced that a dividend will be paid in half a year. However, assume that the stock price does not change today.
  • How will the value of the portfolio change today?

My argument:

  1. If the stock price does not change today due to the announcement, then we can assume the dividend is already priced into the stock value.
  2. In order to use the Black-Scholes-Merton option pricing model, the underlying stock price must only consist of a risky component, and not the certain dividend component as it must be assumed that stock prices follow a geometric Brownian motion.
  3. Since the stock price used for the model decreases (subtracting the present value of the dividend paid in half a year), and the delta of the portfolio is positive, the value of the portfolio must decrease.

Where is the flaw in my argument (if there is one)?
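For reference, step 2 is the standard "escrowed dividend" adjustment: with a known dividend $D$ paid at time $t_D$ and riskless rate $r$, the Black-Scholes input becomes

$$S^{*} = S - D\,e^{-r t_D},$$

and the options are priced on $S^{*}$ while the stock component of the portfolio is still worth $S$. (This is a sketch of the textbook treatment, not a verdict on the argument.)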

by Gustavo Montano at May 24, 2015 02:43 PM

StackOverflow

Scalaj-Http: Using HttpRequest and getting back HttpResponse[String] - how can I access an individual JSON element of response.body?

I want to count the language tags in Github repositories. I am using scalaj-http for that. (https://github.com/scalaj/scalaj-http)

val response: HttpResponse[String] = Http("https://api.github.com/search/repositories?q=size:>=0").asString

val b = response.body

val c = response.code

val h = response.headers

I get back following:
b: String
c: Int
h: Map[String,String]

The body is returned as a string. Now I want to iterate over this body to extract fields and then call a few nested URLs (you will get a better idea of this if you look at the GET result of the URL mentioned above). Basically I want to call one of those URLs. How can I do this?
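A minimal sketch of one way to get at the fields, assuming the play-json library as the parser (the question doesn't name one; any JSON library would do). In GitHub's search response, "items" is the repository array, and each item carries URL fields such as "languages_url" that can be called next:

import play.api.libs.json._

val json: JsValue = Json.parse(response.body)

// collect one nested URL per repository from the "items" array
val languageUrls: Seq[String] = (json \ "items").as[Seq[JsValue]]
  .map(item => (item \ "languages_url").as[String])

// follow one of the nested URLs with a second request
val nested: HttpResponse[String] = Http(languageUrls.head).asString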

by Sahil Sharma at May 24, 2015 02:41 PM

Slick 3.0.0 Select and Create or Update

I'm in a situation where I have to do a select first and use the value to issue a create; it is some versioning that I'm trying to implement. Here is the table definition:

  class Table1(tag: Tag) extends Table[(Int, String, Int)](tag, "TABLE1") {
    def id = column[Int]("ID")
    def name = column[String]("NAME")
    def version = column[Int]("VERSION")

    def * = (id, name, version) // default projection, required by Table

    def indexCol = index("_a", (id, version))
  }

  val tbl1Elems = TableQuery[Table1]

So when a request comes to create or update an entry in Table1, I have to do the following:

1. Select for the given id, if exists, get the version
2. Increment the version
3. Create a new entry

All that should happen in a single transaction. Here is what I have got so far:

  // This entry should first be checked: if the id exists, get the complete
  // set of columns by applying a filter that returns the max version.
  val table1 = Table1(2, "some name", 1)
  for {
    tbl1: Table1 <- tbl1MaxVersionFilter(table1.id)
    maxVersion: Column[Int] = tbl1.version
    result <- tbl1Elems += table1.copy(version = maxVersion + 1) // can't use this!!!
  } yield result

I will later wrap that entire block in one transaction. But I'm wondering how to complete this so that it creates a new version. How can I get the value maxVersion out of the Column so that I can add 1 to it and use it?
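A sketch of one way to keep the whole read-increment-insert inside a single DBIO, so the intermediate value is a plain Int rather than a Column (hypothetical method name; assumes the profile's api._ import, an implicit ExecutionContext, and a Database instance db):

// select the current max version, then insert with max + 1, atomically
def createNewVersion(id: Int, name: String) = {
  val action = (for {
    maxVersion <- tbl1Elems.filter(_.id === id).map(_.version).max.result
    inserted   <- tbl1Elems += ((id, name, maxVersion.getOrElse(0) + 1))
  } yield inserted).transactionally
  db.run(action) // Future[Int]
}

Because the select and the insert are composed into one action and run with .transactionally, the version read and the insert happen in the same database transaction.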

by user3102968 at May 24, 2015 02:41 PM

CompsciOverflow

Sliding window protocol, calculation of sequence number bits

I am preparing for my exams and was solving problems regarding the Sliding Window Protocol when I came across these questions:

A 1000 km long cable operates at 1 MBps. The propagation delay is 10 microseconds/km. If the frame size is 1 kB, then how many bits are required for the sequence number?

A) 3 B) 4 C) 5 D) 6

I got the answer as option C, as follows:

propagation time is 10 microsec/km
so, for 1000 km it is 10*1000 microsec, i.e. 10 ms
then RTT will be 20 ms

in 10^3 ms: 8*10^6 bits
so, in 20 ms: X bits

X = 20*(8*10^6)/10^3 = 160*10^3 bits

now, 1 frame is of size 1 kB, i.e. 8000 bits
so the total number of frames will be 20. This will be the window size.

hence, to represent 20 frames uniquely we need 5 bits.

The answer was correct as per the answer key. Then I came across this one:

Frames of 1000 bits are sent over a 10^6 bps duplex link between two hosts. The propagation time is 25ms. Frames are to be transmitted into this link to maximally pack them in transit (within the link).

What is the minimum number of bits (l) that will be required to represent the sequence numbers distinctly? Assume that no time gap needs to be given between transmission of two frames.

(A) l=2 (B) l=3 (C) l=4 (D) l=5

As with the earlier one, I solved it as follows:

propagation time is 25 ms
then RTT will be 50 ms

in 10^3 ms: 10^6 bits
so, in 50 ms: X bits

X = 50*(10^6)/10^3 = 50*10^3 bits

now, 1 frame is of size 1 kb, i.e. 1000 bits
so the total number of frames will be 50. This will be the window size.

hence, to represent 50 frames uniquely we need 6 bits.

And 6 is not even among the options. The answer key uses the same solution but takes the propagation time, not the RTT, and its answer is 5 bits. I am totally confused; which one is correct?
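One reading that reconciles the two keys (an interpretation, not an authoritative ruling): with transmission time $t_x = L/B$ and one-way propagation time $t_p$, a sender that must keep transmitting until the first acknowledgement returns needs a window of $1 + 2t_p/t_x$ frames, while "maximally pack them in transit (within the link)" only asks how many frames fit during the one-way propagation:

$$\frac{t_p}{t_x} = \frac{25\,\text{ms}}{1000\ \text{bits}/10^6\ \text{bps}} = \frac{25\,\text{ms}}{1\,\text{ms}} = 25 \quad\Rightarrow\quad l = \lceil \log_2 25 \rceil = 5.$$

Under the first reading the window would be $1 + 50 = 51$ frames, hence 6 bits, which is exactly why the two approaches disagree.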

by Karthik at May 24, 2015 02:39 PM

StackOverflow

Computational Complexity of Higher Order Functions?

Map and filter seem like they would be linear, O(n), because they only have to traverse a list once, but is their complexity affected by the function being passed? For example, are the two examples below of the same order?

map (+) list

map (complex_function) list
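One way to make the question precise: map applies the passed function exactly once per element, so its total cost is the sum of the per-element costs,

$$T_{\text{map}(f)}(xs) = O(n) + \sum_{x \in xs} T_f(x),$$

which is $O(n)$ only when $f$ runs in $O(1)$ per element; if the passed function costs $O(g)$ per element, the total is $O(n \cdot g)$. So the two examples are of the same order exactly when their per-element costs have the same order.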

by Dale Matthews at May 24, 2015 02:38 PM

select's fields function in Korma does not restrict columns?

I'm playing around with Clojure and its Korma library, using an SQLite3 database on Windows. I'm following an example from the 7web book, which introduces the select* function and its friends.

But using the fields function adds fields instead of limiting them.

test=> (-> (select* issue)
  #_=>     (fields :title)
  #_=>     (as-sql))
"SELECT \"issue\".\"id\", \"issue\".\"project_id\", \"issue\".\"title\", \"issue\".\"description\", \"issue\".\"status\", \"issue\".\"title\" FROM \"issue\""

Did I miss anything?

by sschmeck at May 24, 2015 02:27 PM

QuantOverflow

How to calculate the JdK RS-Ratio

Anyone have a clue how to calculate the JdK RS-Ratio?

Let's say I want to compare the Relative strength for these:

  • EWA iShares MSCI Australia Index Fund
  • EWC iShares MSCI Canada Index Fund
  • EWD iShares MSCI Sweden Index Fund
  • EWG iShares MSCI Germany Index Fund
  • EWH iShares MSCI Hong Kong Index Fund
  • EWI iShares MSCI Italy Index Fund
  • EWJ iShares MSCI Japan Index Fund
  • EWK iShares MSCI Belgium Index Fund
  • EWL iShares MSCI Switzerland Index Fund
  • EWM iShares MSCI Malaysia Index Fund
  • EWN iShares MSCI Netherlands Index Fund
  • EWO iShares MSCI Austria Index Fund
  • EWP iShares MSCI Spain Index Fund
  • EWQ iShares MSCI France Index Fund
  • EWS iShares MSCI Singapore Index Fund
  • EWU iShares MSCI United Kingdom Index Fund
  • EWW iShares MSCI Mexico Index Fund
  • EWT iShares MSCI Taiwan Index Fund
  • EWY iShares MSCI South Korea Index Fund
  • EWZ iShares MSCI Brazil Index Fund
  • EZA iShares MSCI South Africa Index Fund

Each of them should be compared to the S&P 500 (via the SPY ETF): calculate the relative strength of each of them against SPY and normalize it (I think that is the only solution).

More info on the concept: http://www.mta.org/eweb/docs/pdfs/11symp-dekempanaer.pdf
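The exact JdK RS-Ratio formula is proprietary, so what follows is only a sketch of the usual starting point, not the real indicator: compute the relative-strength line of each fund against SPY, then normalize it, here as a z-score over a trailing window (the window length of 10 is an arbitrary assumption, and ewaPrices/spyPrices are hypothetical date-aligned price series):

// relative-strength line of a fund versus the benchmark
def rsLine(fund: Seq[Double], bench: Seq[Double]): Seq[Double] =
  fund.zip(bench).map { case (f, b) => 100.0 * f / b }

// z-score of the latest point over a trailing window
def zScore(xs: Seq[Double], window: Int = 10): Double = {
  val w = xs.takeRight(window)
  val mean = w.sum / w.size
  val sd = math.sqrt(w.map(x => math.pow(x - mean, 2)).sum / w.size)
  (xs.last - mean) / sd
}

val rsRatioApprox = zScore(rsLine(ewaPrices, spyPrices))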

by Donedge at May 24, 2015 02:21 PM

StackOverflow

How Scala Playframework 2 unit test in memory database create and clean up

I'm new to Play with Scala. I have a relational database and would like to write unit tests. I want to use an H2 in-memory database. I have a database DDL script to generate all the relational tables and populate them with test data.

I want to drop the tables or the database to clean up the whole database for each unit test. How can I create, populate and drop the database in my unit tests? I'm open to using any unit test framework.

Please suggest how I can achieve this goal. Thanks in advance!
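A minimal sketch using Play's built-in test helpers (assuming a Play 2.3-era specs2 setup): run each example against a fresh FakeApplication configured with an in-memory H2 database. The database exists only for that test, so there is nothing to drop by hand; evolutions (or your DDL script wired into them) create and populate the schema on startup:

import play.api.test._
import play.api.test.Helpers._

class UserModelSpec extends PlaySpecification {

  "User model" should {
    "create and find a user" in running(
      FakeApplication(additionalConfiguration = inMemoryDatabase())) {
      // each example gets its own in-memory H2 database,
      // so every test starts from a clean, freshly created schema
      // ... exercise your models here ...
      success
    }
  }
}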

by masiboo at May 24, 2015 02:20 PM

CompsciOverflow

Average vs Worst-Case Hitting Time

Consider a simple random walk on an undirected graph and let $H_{ij}$ be the hitting time from $i$ to $j$. How much bigger can $$ H_{\rm max} = \max_{i,j} H_{ij}, $$ be compared to $$ H_{\rm ave} = \frac{1}{n^2} \sum_{i=1}^n \sum_{j=1}^n H_{ij}.$$ For all the examples I can think of, these two quantities are of roughly the same order of magnitude.

To make this into a formal question, define $$\phi(n) = \max_{\mbox{undirected graphs with } n \mbox{ nodes }} \frac{H_{\rm max}}{H_{\rm ave}}.$$ How fast does $\phi(n)$ grow with $n$?

by Pramod T. at May 24, 2015 02:02 PM

StackOverflow

Contextual eval in clojure

Here is an example from The Joy of Clojure, chapter 8:

(defn contextual-eval [ctx expr]
  (let [new-expr
        `(let [~@(mapcat (fn [[k v]]
                           [k `'~v])
                         ctx)]
           ~expr)]
    (pprint new-expr)
    (eval new-expr)))
(pprint (contextual-eval '{a 1 b 2} '(+ a b)))

I find the `'~v part quite perplexing; what's it for?

I also tried to modify the function a bit:

(defn contextual-eval [ctx expr]
  (let [new-expr
        `(let [~@(mapcat (fn [[k v]]
                           [k `~v])
                         ctx)]
           ~expr)]
    (pprint new-expr)
    (eval new-expr)))
(pprint (contextual-eval '{a 1 b 2} '(+ a b)))


(defn contextual-eval [ctx expr]
  (let [new-expr
        `(let [~@(vec (apply 
                        concat 
                        ctx))]
           ~expr)]
    (pprint new-expr)
    (eval new-expr)))
(pprint (contextual-eval '{a 1 b 2} '(+ a b)))

All the versions above have a similar effect. Why did the author choose to use `'~v then?

A more detailed look:

(use 'clojure.pprint)
(defmacro epprint [expr]
  `(do
     (print "==>")
     (pprint '~expr)
     (pprint ~expr)))
(defmacro epprints [& exprs]
  (list* 'do (map (fn [x] (list 'epprint x))
                  exprs)))

(defn contextual-eval [ctx expr]
  (let [new-expr
        `(let [~@(mapcat (fn [[k v]]
                           (epprints
                             (class v)
                             v
                             (class '~v)
                             '~v
                             (class `'~v)
                             `'~v
                             (class ctx)
                             ctx)
                           [k `~v])
                         ctx)]
           ~expr)]
    (pprint new-expr)
    (eval new-expr)))
(pprint (contextual-eval '{a (* 2 3) b (inc 11)} '(+ a b)))

This prints out the following in the repl:

==>(class v)
clojure.lang.PersistentList
==>v
(* 2 3)
==>(class '~v)
clojure.lang.PersistentList
==>'~v
~v
==>(class
 (clojure.core/seq
  (clojure.core/concat
   (clojure.core/list 'quote)
   (clojure.core/list v))))
clojure.lang.Cons
==>(clojure.core/seq
 (clojure.core/concat (clojure.core/list 'quote) (clojure.core/list v)))
'(* 2 3)
==>(class ctx)
clojure.lang.PersistentArrayMap
==>ctx
{a (* 2 3), b (inc 11)}
==>(class v)
clojure.lang.PersistentList
==>v
(inc 11)
==>(class '~v)
clojure.lang.PersistentList
==>'~v
~v
==>(class
 (clojure.core/seq
  (clojure.core/concat
   (clojure.core/list 'quote)
   (clojure.core/list v))))
clojure.lang.Cons
==>(clojure.core/seq
 (clojure.core/concat (clojure.core/list 'quote) (clojure.core/list v)))
'(inc 11)
==>(class ctx)
clojure.lang.PersistentArrayMap
==>ctx
{a (* 2 3), b (inc 11)}
==>new-expr
(clojure.core/let [a (* 2 3) b (inc 11)] (+ a b))
18

Again, using a single syntax quote for v seems to get the job done.

In fact, using `'~v might cause you some trouble:

(defn contextual-eval [ctx expr]
  (let [new-expr
        `(let [~@(mapcat (fn [[k v]]
                           [k `'~v])
                         ctx)]
           ~expr)]
    (pprint new-expr)
    (eval new-expr)))
(pprint (contextual-eval '{a (inc 3) b (* 3 4)} '(+ a b)))

CompilerException java.lang.ClassCastException: clojure.lang.PersistentList cannot be cast to java.lang.Number, compiling:(/Users/kaiyin/personal_config_bin_files/workspace/typedclj/src/typedclj/macros.clj:14:22) 

by qed at May 24, 2015 01:55 PM

How do I flatten a sequence of sequences of maps into a sequence of vectors?

I'm trying to build a POS tagger in Clojure. I need to iterate over a file and build out feature vectors. The input is (text pos chunk) triples from a file like the following:

input from the file:  
        I PP B-NP
        am VBP B-VB
        groot NN B-NP

I've written functions to input the file, transform each line into a map, and then slide over a variable amount of the data.

(defn lazy-file-lines
  "open a file and make it a lazy sequence."
  [filename]
  (letfn [(helper [rdr]
            (lazy-seq
             (if-let [line (.readLine rdr)]
               (cons line (helper rdr))
               (do (.close rdr) nil))))]
    (helper (clojure.java.io/reader filename))))

(defn to-map
  "take lines from a file and make each one a map."
  [lines]
  (map
   #(zipmap [:text :pos :chunk] (clojure.string/split (apply str %) #" "))
   lines))

(defn window
  "create windows around the target word."
  [size filelines]
  (partition size 1 [] filelines))

I plan to use the above functions in the following way:

 (take 2 (window 3 (to-map (lazy-file-lines "/path/to/train.txt"))))

which gives the following output for the first two entries in the sequence:

(({:chunk B-NP, :pos NN, :text Confidence} {:chunk B-PP, :pos IN, :text in} {:chunk B-NP, :pos DT, :text the}) ({:chunk B-PP, :pos IN, :text in} {:chunk B-NP, :pos DT, :text the} {:chunk I-NP, :pos NN, :text pound}))   

Given each sequence of maps within the sequence, I want to extract :pos and :text for each map and put them in one vector. Like so:

[Confidence in the NN IN DT]
[in the pound IN DT NN]

I've not been able to conceptualize how to handle this in Clojure. My partial attempt at a solution is below:

(defn create-features
  "creates the features and tags from the datafile."
  [filename windowsize  & features]
 (map  #(apply select-keys % [:text :pos])
   (->>
    (lazy-file-lines filename)
    (window windowsize))))   

I think one of the issues is that apply is referencing a sequence itself, so select-keys isn't operating on a map. I'm not sure how to nest another apply function into this, though.

Any thoughts on this code would be great. Thanks.

by mtbarta at May 24, 2015 01:41 PM

Sbt 0.13 ScriptEngine is Null for getEngineByName(“JavaScript”)

When I run tests which use getEngineByName("JavaScript") in sbt 0.13, the method returns null. The same code works fine with sbt 0.12.x.

Tried on different environments: Windows 7 and Mac - same problem.

I tried to manually set javaHome in sbt.

test:dependencyClasspath contains .ivy2/cache/rhino/js/jars/js-1.6R7.jar

Any idea what's wrong?
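One thing worth trying (a guess based on sbt 0.13's stricter class loader isolation, not a verified fix): run the tests in a forked JVM so the JDK's built-in script engines are visible on the test class path. In build.sbt:

fork in Test := true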

by nau at May 24, 2015 01:41 PM

DragonFly BSD Digest

Lazy Reading for 2015/05/24

I guess the accidental theme this week is Unix.

Your unrelated link of the week: svblm.  Found via a link to Infinideer and Forest Ambassador.

by Justin Sherrill at May 24, 2015 01:35 PM

StackOverflow

Why do we need the From type parameter in Scala's CanBuildFrom

I am experimenting with a set of custom container functions and took inspiration from Scala's collections library with regard to the CanBuildFrom[-From, -Elem, -To] implicit parameter.

In Scala's collections, CanBuildFrom enables parametrization of the return type of functions like map. Internally the CanBuildFrom parameter is used like a factory: map applies its first-order function to its input elements, feeds the result of each application into the CanBuildFrom parameter, and finally calls CanBuildFrom's result method, which builds the final result of the map call.

In my understanding, the -From type parameter of CanBuildFrom[-From, -Elem, -To] is only used in apply(from: From): Builder[Elem, To], which creates a Builder preinitialized with some elements. In my custom container functions I don't have that use case, so I created my own CanBuildFrom variant, Factory[-Elem, +Target].

Now I can have a trait Mappable

trait Factory[-Elem, +Target] {
  def apply(elems: Traversable[Elem]): Target
  def apply(elem: Elem): Target = apply(Traversable(elem))
}

trait Mappable[A, Repr] {
  def traversable: Traversable[A]

  def map[B, That](f: A => B)(implicit factory: Factory[B, That]): That =
    factory(traversable.map(f))
}

and an implementation Container

object Container {
  implicit def elementFactory[A] = new Factory[A, Container[A]] {
    def apply(elems: Traversable[A]) = new Container(elems.toSeq)
  }
}

class Container[A](val elements: Seq[A]) extends Mappable[A, Container[A]] {
  def traversable = elements
}

Unfortunately though, a call to map

object TestApp extends App {
  val c = new Container(Seq(1, 2, 3, 4, 5))
  val mapped = c.map { x => 2 * x }
}

yields the error message not enough arguments for method map: (implicit factory: tests.Factory[Int,That])That. Unspecified value parameter factory. The error goes away when I add an explicit import (import Container._) or when I explicitly specify the expected return type (val mapped: Container[Int] = c.map { x => 2 * x }).

All of these "workarounds" become unnecessary when I add an unused third type parameter to Factory

trait Factory[-Source, -Elem, +Target] {
  def apply(elems: Traversable[Elem]): Target
  def apply(elem: Elem): Target = apply(Traversable(elem))
}

trait Mappable[A, Repr] {
  def traversable: Traversable[A]

  def map[B, That](f: A => B)(implicit factory: Factory[Repr, B, That]): That =
    factory(traversable.map(f))
}

and change the implicit definition in Container

object Container {
  implicit def elementFactory[A, B] = new Factory[Container[A], B, Container[B]] {
    def apply(elems: Traversable[B]) = new Container(elems.toSeq) // Elem is B here
  }
}

So finally my question: Why is the seemingly unused -Source type parameter necessary to resolve the implicit?

And as an additional meta question: How do you solve these kinds of problems if you don't have a working implementation (the collection library) as a template?

Clarification

It might be helpful to explain why I thought the implicit resolution should work without the additional -Source type parameter.

According to this doc entry on implicit resolution, implicits are looked up in the companion objects of the types involved. The author does not provide an example of implicit parameters coming from companion objects (he only explains implicit conversions), but I think what this means is that a call to Container[A]#map should make all implicits from object Container available as implicit parameters, including my elementFactory. This assumption is supported by the fact that it is enough to provide the target type (no additional explicit imports!) to get the implicit resolved.

by jmaschad at May 24, 2015 01:30 PM

Dependent futures

Starting to play with Scala futures, I got stuck on dependent futures.

Let's take an example. I search for places and get a Future[Seq[Place]]. For each of these places, I search for the closest subway stations (the service returns a Future[List[Station]]).

I would write this:

Place.get()
.map { places =>
    places.map { place =>
        Station.closestFrom(place).map { stations =>
            SearchResult(place, stations)
        }
    }
}

That will give me a Future[Seq[Future[SearchResult]]]... which is... not what I would have expected.

What did I miss to get a Future[Seq[SearchResult]]?
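A minimal sketch of one way to flatten this, using Future.traverse to turn the sequence inside out (assuming an implicit ExecutionContext is in scope):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val results: Future[Seq[SearchResult]] =
  Place.get().flatMap { places =>
    // traverse runs closestFrom for every place and collects all the
    // SearchResults into one Future of the whole sequence
    Future.traverse(places) { place =>
      Station.closestFrom(place).map(stations => SearchResult(place, stations))
    }
  }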

Thanks for all,

Alban

by Alban at May 24, 2015 01:17 PM

NoClassDefFoundError with Pantomime when trying to run pantomime.extract/parse

I am new to Clojure and working on a project where I am trying to extract text from web pages using Pantomime. I am managing the project with Leiningen and editing using Eclipse / CCW. When I try to use the pantomime.extract/parse function, I get the following error:

Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class org.apache.tika.parser.pkg.PackageParser, compiling:(/tmp/form-init7461469090551574085.clj:1:72)
    at clojure.lang.Compiler.load(Compiler.java:7142)
    at clojure.lang.Compiler.loadFile(Compiler.java:7086)
    at clojure.main$load_script.invoke(main.clj:274)
    at clojure.main$init_opt.invoke(main.clj:279)
    at clojure.main$initialize.invoke(main.clj:307)
    at clojure.main$null_opt.invoke(main.clj:342)
    at clojure.main$main.doInvoke(main.clj:420)
    at clojure.lang.RestFn.invoke(RestFn.java:421)
    at clojure.lang.Var.invoke(Var.java:383)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.Var.applyTo(Var.java:700)
    at clojure.main.main(main.java:37)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.tika.parser.pkg.PackageParser
    at org.apache.tika.parser.pkg.ZipContainerDetector.detect(ZipContainerDetector.java:86)
    at org.apache.tika.detect.CompositeDetector.detect(CompositeDetector.java:61)
    at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:113)
    at pantomime.extract$eval1104$fn__1105.invoke(extract.clj:29)
    at pantomime.extract$eval1087$fn__1088$G__1078__1093.invoke(extract.clj:18)
    at pantomime.extract$eval1116$fn__1117.invoke(extract.clj:53)
    at pantomime.extract$eval1087$fn__1088$G__1078__1093.invoke(extract.clj:18)
    at com.scrape$extract_text.invoke(scrape.clj:26)
    at com.scrape$get_words.invoke(scrape.clj:29)
    at com.sis$main.invoke(sis.clj:6)
    at clojure.lang.Var.invoke(Var.java:375)
    at user$eval5.invoke(form-init7461469090551574085.clj:1)
    at clojure.lang.Compiler.eval(Compiler.java:6703)
    at clojure.lang.Compiler.eval(Compiler.java:6693)
    at clojure.lang.Compiler.load(Compiler.java:7130)
    ... 11 more
ABRT problem creation: 'success'

I made sure to include the appropriate dependency line in my project.clj:

  [com.novemberain/pantomime "2.6.0"]

I also made sure that I am requiring the pantomime.extract namespace in my namespace:

(ns com.scrape
  (:require  [pantomime.extract :as extract]))

Here is the function that is calling "extract":

(defn extract-text [url]
  (:text (extract/parse (java.net.URL. url))))

I have tried running "lein clean" and "lein deps". I've also deleted the directory where Leiningen stores dependencies (~/.m2) and allowed lein to automatically re-download all the appropriate jar files. Still, whether I am running a REPL from the command line, using "lein run", or running from Eclipse, I always get the above error.

Why am I getting this error, and how can I fix it?

UPDATE

I tried to recreate this issue in a new project with as little code as possible in order to post the full source here; however, in a new lein project I was able to copy over all my code from my original project, and I am not getting the errors anymore.

Any idea what might have happened? Some glitch with Leiningen?

by qdet at May 24, 2015 01:09 PM

QuantOverflow

Is R being replaced by Python at quant desks?

I know the title sounds a little extreme, but I wonder whether R is being phased out by a lot of quant desks at sell-side banks as well as hedge funds in favor of Python. I get the impression that, with improvements in Pandas and other Python packages, functionality in Python is drastically improving for meaningfully mining data and modeling time series. I have also seen quite impressive implementations in Python to parallelize code and fan out computations to several servers/machines. I know some packages in R are capable of that too, but I just sense that the current momentum clearly favors Python.

I need to make a decision regarding the architecture of a subset of my modeling framework, and I need some input on the current sentiment of other quants.

I also have to admit that my initial reservations regarding performance via Python are mostly outdated because some of the packages make heavy use of C implementations under the hood and I have seen implementations that clearly outperform even efficiently written, compiled OOP language code.

Can you please comment on what you are using? I am not asking for opinions on whether one is better or worse for the tasks below, but specifically why you use R or Python, and whether you even place them in the same category for accomplishing, among others, the following tasks:

  • acquire, store, maintain, read, clean time series
  • perform basic statistics on time series, advanced statistical models such as multivariate regression analyses,...
  • performing mathematical computations (fourier transforms, PDE solver, PCA, ...)
  • visualization of data (static and dynamic)
  • pricing derivatives (application of pricing models such as interest rate models)
  • interconnectivity (with Excel, servers, UI, ...)

by Matt Wolf at May 24, 2015 12:52 PM

QuantOverflow

Obtaining intra-day values of the EUR-USD exchange

For my project I need the values of the EUR-USD exchange rate (both intra-day and ticker). I've been playing around with Yahoo's YQL API, and at the moment I can obtain the current value of the exchange rate, but I have no idea how to get the intra-day values.

Any suggestions?

by gyss at May 24, 2015 12:01 PM

Fred Wilson

Rinse And Repeat

I’d like to call out a really great blog post (and talk) my colleague Nick Grossman delivered last week. He called it Venture Capital vs Community Capital, but to me its about the endless cycle of domination and disruption that plays out in the tech sector. This bit from the post rings so true to me:

So there’s the pattern: tech companies build dominant market positions, then open technologies emerge which erode the the tech companies’ lock on power (this is sometimes an organized rebellion against this corporate power, and is sometime more of a happy accident).  These open technologies then in turn become the platform upon which the next generation of venture-backed companies is built.  And so on and so on; rinse and repeat.

So, all that is to say: this is not a new thing.  And that seeing this as part of a pattern can help us understand what to make of it, and where the next opportunities could emerge.

Nick wrote the post and did the presentation for the OuiShareFest, an international gathering of folks interested in the peer economy. Nick starts out noting that the early enthusiasm for the peer economy has moderated with the understanding that a few large platforms have emerged and have come to dominate the sector.

Nick’s presentation and post, therefore, was a reaction to those emotions and a reminder that what goes around comes around eventually. That is certainly what I have observed in the thirty plus years I’ve been working in tech. Rinse and repeat. Same as it ever was.

by Fred Wilson at May 24, 2015 11:58 AM

QuantOverflow

pdf of simple equation, compound Poisson noise

I would like to find the probability density function (at stationarity) of the random variable $X_t$, where: \begin{equation*} dX_t = -aX_t\,dt + dN_t, \end{equation*} $a$ is a constant and $N_t$ is a compound Poisson process with Poisson jump-size distribution.

In other words, $X_t$ solves the ordinary differential equation $\frac{d X_t}{dt} + a X_t=0$, except at jump times $t_i$, whose inter-arrival times are exponentially distributed with mean $1/k$, when $X_t$ increases by an integer drawn from $M\sim Poi(m)$ (i.e. $X_t$ gets a Poisson-distributed "kick" upwards at exponentially distributed intervals).

Is there a way of obtaining the pdf of this random variable $X$? If I have understood things correctly, the Kramers-Moyal equation for the pdf of $X$ is of infinite order because this is a jump Markov process. I have also tried looking at the master equation, but I get lost. However, I am new to this literature and was wondering if the solution is easy for those in the know, since it is such a simple system.
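Not an answer, but a route that may help (stated under the assumption that the jump times form a Poisson process of rate $k$): between jumps the process decays deterministically, so $X_t = X_0 e^{-at} + \sum_i M_i e^{-a(t-t_i)}$, and for OU-type processes driven by a compound Poisson process the stationary law is usually characterized by its characteristic function rather than a closed-form density:

$$\mathbb{E}\left[e^{iuX}\right] = \exp\left(k \int_0^\infty \left(\phi_M\!\left(u e^{-as}\right) - 1\right) ds\right),$$

where $\phi_M$ is the characteristic function of the jump size; for $M \sim Poi(m)$, $\phi_M(u) = e^{m(e^{iu}-1)}$. The pdf would then come from numerical inversion.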

Many thanks for your help!

by stochastic_newbie at May 24, 2015 11:45 AM

StackOverflow

';' expected but 'import' found - Scala and Spark

I'm trying to work with Spark and Scala, compiling a standalone application. I don't know why I'm getting this error:

topicModel.scala:2: ';' expected but 'import' found.
[error] import org.apache.spark.mllib.clustering.LDA
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed

This is the build.sbt code:

name := "topicModel"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.1"
libraryDependencies += "org.apache.spark" %% "spark-graphx" % "1.3.1"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "1.3.1"

And those are the imports:

import scala.collection.mutable
import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.rdd.RDD

object Simple {
  def main(args: Array[String]) {

by Alessio Rossotti at May 24, 2015 11:39 AM

Reading a SQL file using getResourceAsStream in Scala

I'm trying to read and execute a SQL file in Spark SQL.

sqlContext.sql(scala.io.Source.fromInputStream(getClass.getResourceAsStream("/" + "dq.sql")).getLines.mkString(" ").stripMargin).take(1)

My SQL is very long. When I run it directly in the Spark shell, it runs fine. When I try to read it using getResourceAsStream, I hit:

java.lang.RuntimeException: [1.10930] failure: end of input
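A guess at the cause, not a confirmed diagnosis: joining the lines with a single space means any -- line comment in dq.sql comments out everything after it, which would produce exactly this kind of "end of input" failure. A sketch that preserves the newlines:

val sql = scala.io.Source
  .fromInputStream(getClass.getResourceAsStream("/dq.sql"))
  .getLines()
  .mkString("\n") // keep line breaks so "--" comments end at their line
sqlContext.sql(sql).take(1)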

by user3279189 at May 24, 2015 11:39 AM

Akka message passing timing

I am working on an artificial life simulation with Scala and Akka, and so far I've been super happy with both. I am having some issues with timing, however, that I can't quite explain.

At the moment, each animal in my simulation is a pair of actors (animal + brain). Typically, these two actors take turns (animal sends sensor input to brain, waits for result, acts on it and starts over). Every now and then however, animals need to interact with each other to eat each other or reproduce.

The one thing that is odd to me is the timing. It turns out that sending a message from one animal to another is a LOT slower (about 100x) than sending from animal to brain. This puts my poor predators and sexually active animals at a disadvantage compared to the vegetarians and asexual creatures (disclaimer: I am a vegetarian myself, but I think there are better reasons for being a vegetarian than getting stuck for a bit while trying to hunt...).

I extracted a minimal code snippet that demonstrates the problem:

package edu.blindworld.test

import java.util.concurrent.TimeUnit

import akka.actor.{ActorRef, ActorSystem, Props, Actor}
import akka.pattern.ask
import akka.util.Timeout

import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.util.Random

class Animal extends Actor {
  val brain = context.actorOf(Props(classOf[Brain]))
  var animals: Option[List[ActorRef]] = None

  var brainCount = 0
  var brainRequestStartTime = 0L
  var brainNanos = 0L

  var peerCount = 0
  var peerRequestStartTime = 0L
  var peerNanos = 0L

  override def receive = {
    case Go(all) =>
      animals = Some(all)
      performLoop()
    case BrainResponse =>
      brainNanos += (System.nanoTime() - brainRequestStartTime)
      brainCount += 1
      // Animal interactions are rare
      if (Random.nextDouble() < 0.01) {
        // Send a ping to a random other one (or ourselves). Defer our own loop
        val randomOther = animals.get(Random.nextInt(animals.get.length))
        peerRequestStartTime = System.nanoTime()
        randomOther ! PeerRequest
      } else {
        performLoop()
      }
    case PeerResponse =>
      peerNanos += (System.nanoTime() - peerRequestStartTime)
      peerCount += 1
      performLoop()
    case PeerRequest =>
      sender() ! PeerResponse
    case Stop =>
      sender() ! StopResult(brainCount, brainNanos, peerCount, peerNanos)
      context.stop(brain)
      context.stop(self)
  }

  def performLoop() = {
    brain ! BrainRequest
    brainRequestStartTime = System.nanoTime()
  }
}

class Brain extends Actor {
  override def receive = {
    case BrainRequest =>
      sender() ! BrainResponse
  }
}

case class Go(animals: List[ActorRef])
case object Stop
case class StopResult(brainCount: Int, brainNanos: Long, peerCount: Int, peerNanos: Long)

case object BrainRequest
case object BrainResponse

case object PeerRequest
case object PeerResponse

object ActorTest extends App {
  println("Sampling...")
  val system = ActorSystem("Test")
  val animals = (0 until 50).map(i => system.actorOf(Props(classOf[Animal]))).toList
  animals.foreach(_ ! Go(animals))
  Thread.sleep(5000)
  implicit val timeout = Timeout(5, TimeUnit.SECONDS)
  val futureStats = animals.map(_.ask(Stop).mapTo[StopResult])
  val stats = futureStats.map(Await.result(_, Duration(5, TimeUnit.SECONDS)))
  val brainCount = stats.foldLeft(0)(_ + _.brainCount)
  val brainNanos = stats.foldLeft(0L)(_ + _.brainNanos)
  val peerCount = stats.foldLeft(0)(_ + _.peerCount)
  val peerNanos = stats.foldLeft(0L)(_ + _.peerNanos)
  println("Average time for brain request: " + (brainNanos / brainCount) / 1000000.0 + "ms (sampled from " + brainCount + " requests)")
  println("Average time for peer pings: " + (peerNanos / peerCount) / 1000000.0 + "ms (sampled from " + peerCount + " requests)")
  system.shutdown()
}

This is what happens here:

  • I am creating 50 pairs of animal/brain actors
  • They are all launched and run for 5 seconds
  • Each animal does an infinite loop, taking turns with its brain
  • In 1% of all runs, an animal sends a ping to a random other animal and waits for its reply. Then, it continues its loop with its brain
  • Each request to the brain and to peer is measured, so that we can get an average
  • After 5 seconds, everything is stopped and the timings for brain-requests and pings to peers are compared

On my dual core i7 I am seeing these numbers:

Average time for brain request: 0.004708ms (sampled from 21073859 requests)

Average time for peer pings: 0.66866ms (sampled from 211167 requests)

So pings to peers are 165x slower than requests to brains. I've been trying lots of things to fix this (e.g. priority mailboxes and warming up the JIT), but haven't been able to figure out what's going on. Does anyone have an idea?

by Daniel Lehmann at May 24, 2015 11:20 AM

Advogato

Dr. Xi xiaoxing needs your support

Do you believe in Emerson's conviction that Americans sometimes fail to see the best in our own people, in their best hours and their best thoughts?

Ever since 9/11 and the wars we have waged since then, terror has reigned supreme. It has obscured our focus on what America really stands for in the world. Too many times, an indictment turns into a declaration of war between races, sexes and whatever else one can name, bringing out the worst in us.

Today, this particular reflection is prompted by Temple University Physics Department chair Dr. Xi Xiaoxing being accused of four counts of wire fraud, with a notice of forfeiture, in an indictment unsealed in the Eastern District Court of Pennsylvania this Thursday.

May 24, 2015 10:57 AM

StackOverflow

How to parse a trait and objects to JSON in Scala Play Framework

Currently I am working on a web application using the Play Framework, and now I am building a JSON API. Unfortunately I am having problems parsing my objects to JSON with the built-in JSON library. We have the following trait, which defines the type of Shipment and which parser to use; a case class which has a ShipmentType, so we know which parser to use for each type; and a method which returns all stored shipments as a list.

trait ShipmentType {

  def parser(list: List[String]): ShipmentTypeParser

}

object ShipmentTypeA extends ShipmentType {

  def parser(list: List[String]) = new ShipmentTypeAParser(list)

}

object ShipmentTypeB extends ShipmentType {

  def parser(list: List[String]) = new ShipmentTypeBParser(list)

}

object ShipmentTypeC extends ShipmentType {

  def parser(list: List[String]) = new ShipmentTypeCParser(list)

}

case class Shipment(id: Long, name: String, date: Date, shipmentType: ShipmentType)

To write this to JSON I use the following implicit val:

import play.api.libs.json._
import play.api.libs.functional.syntax._

def findAll = Action {
    Ok(Json.toJson(Shipments.all))
}
implicit val shipmentWrites: Writes[Shipment] = (
    (JsPath \ "id").write[Option[Long]] and
    (JsPath \ "name").write[String] and
    (JsPath \ "date").write[Date] and
    (JsPath \ "shipmentType").write[ShipmentType]
)(unlift(Shipment.unapply))

Next we need an extra one for the ShipmentType:

implicit val shipmentTypeWriter: Writes[ShipmentType] = ??? // how to define this?

But this is where I get stuck: I cannot seem to find a way to define the Writes for the ShipmentType.

I also tried defining them as follows according to another page of the Play Framework Documentation:

implicit val shipmentWrites: Writes[Shipment] = Json.writes[Shipment]
implicit val shipmentTypeWrites: Writes[ShipmentType] =Json.writes[ShipmentType]

However this fails too, as I get errors like "No unapply function found". Does anyone have an idea how to implement a Writes for this? Preferably serializing the shipment type as a string in the JSON.
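For what it's worth, the Json.writes macro relies on a case class unapply, which the ShipmentType objects don't have, so a Writes for the trait has to be written by hand. A minimal sketch that serializes the type as a string by matching on the three objects (define it before shipmentWrites so that write[ShipmentType] can find it):

implicit val shipmentTypeWrites: Writes[ShipmentType] = new Writes[ShipmentType] {
  def writes(t: ShipmentType): JsValue = t match {
    case ShipmentTypeA => JsString("ShipmentTypeA")
    case ShipmentTypeB => JsString("ShipmentTypeB")
    case ShipmentTypeC => JsString("ShipmentTypeC")
  }
}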

by Willem Jan Glerum at May 24, 2015 10:53 AM

Deserialization case class from ByteString

I send a case class using:

tcpActor ! Tcp.Write(MyCaseClass(arg1: Class1, arg2: Class2).data)

Then I receive:

case Tcp.Received(data: ByteString)

Is there any simple way to match data against MyCaseClass without using the low-level Java serializer?

by galvanize at May 24, 2015 10:18 AM

Scala code quality metrics

Could you please share your experience with code quality metrics for a Scala codebase.

We have SonarQube, and it works fine for Java projects: collecting and analyzing metrics. But I've found that the Sonar Scala plugins are outdated.

So, what do you use?

by Orest Ivasiv at May 24, 2015 10:17 AM

Pattern matching ParseResult in unit test

I'm stepping through my first Scala project, looking at parser combinators in particular. I'm having trouble getting a simple unit test scenario to work, and I'm trying to understand what I'm missing.

I'm stuck on pattern matching a ParseResult into the case classes Success, Failure and Error. I can't get Scala to resolve the case classes. There are a few examples of this around, but they all seem to be using them inside something that extends one of the parser classes. For example, the tests on GitHub are inside the same package, and the example here is inside a class extending a parser.

The test I'm trying to write looks like:

package test.parsertests

import parser.InputParser // my sut   
import scala.util.parsing.combinator._

import org.scalatest.FunSuite   
class SetSuite extends FunSuite {

  val sut = new InputParser()

  test("Parsing a valid command") {
    val result = sut.applyParser(sut.commandParser, "SOME VALID INPUT")
    result match {
       case Success(x, _) => println("Result: " + x.toString) // <-- not found: value Success
       case Failure(msg, _) => println("Failure: " + msg) // similar
       case Error(msg, _) => println("Error: " + msg) // similar
    }
  }
}

and the method I'm calling is designed to let me exercise each of my parsers on my SUT:

package parser

import scala.util.parsing.combinator._
import scala.util.parsing.combinator.syntactical._

class InputParser extends StandardTokenParsers {

  def commandParser: Parser[Command] =
    ("Command " ~> coord ~ coord ~ direction) ^^ { case x ~ y ~ d => new Command(x, y, d) }

  def applyParser[T](p: Parser[T], c: String): ParseResult[T] = {
    val tokens = new lexical.Scanner(c)
    phrase(p)(tokens)
  }
}

The fundamental issue is getting the case classes resolved in my test scope. Based on the source for the Parsers class, how can I get them defined? Can I resolve this with some additional import statements, or are they only accessible via inheritance? I've tried all the combinations that should resolve this issue, but I'm obviously missing something here.
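For what it's worth, Success, Failure and Error are path-dependent inner classes of Parsers, so they cannot be resolved independently of a parser instance; they have to be referenced (or imported) through the instance itself. A sketch of the match rewritten that way:

val result = sut.applyParser(sut.commandParser, "SOME VALID INPUT")
result match {
  case sut.Success(x, _)   => println("Result: " + x)
  case sut.Failure(msg, _) => println("Failure: " + msg)
  case sut.Error(msg, _)   => println("Error: " + msg)
}

Alternatively, import sut.{Success, Failure, Error} at the top of the test brings the same names into scope.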

by eddie.sholl at May 24, 2015 10:02 AM

Scala Asynchronous Database Calls

I'm currently using Slick 3.x, which is completely asynchronous in all its calls to the database. Let's say I have a table with some sort of versioning. Every time I create a new entry from an already existing entry (i.e. update a given entry), I have to make sure that I increment the version number.

How can I make sure, in this asynchronous world of database communication, that I maintain data integrity? In my case with versioning, I would first do a select for the max version, which gives me back a Future; I would then use the result of this Future, increment it by 1 and issue a create command.

It could very well happen that thread 1 starts with the select for the max version and pauses for a while; thread 2, serving a new request, runs the select for the max version, increments it and writes the new record to the database. Now thread 1 comes back and tries to do the same, only to overwrite what thread 2 wrote.

I might also end up with multiple duplicates, because the order in which the futures run can differ!

by user3102968 at May 24, 2015 09:43 AM

Advice to Implement Any Configuration Management Tool - Amazon Web Services [on hold]

We have a fairly unusual situation, so I will try to be as clear as I can.

Project information in a nutshell:

In one of my projects we need to implement a configuration management tool (Chef, Puppet, Ansible, etc.). Project in a nutshell: we're supporting our client's infrastructure, which runs entirely on AWS. They have five products which are used by their clients. Their setup is like this: five AWS accounts (Prod in the VA and CA regions, Dev (CA region), Demo (CA region), UAT (CA region)), all running in EC2-Classic. All the instances run Amazon Linux, and each instance runs multiple Tomcats; each Tomcat hosts a different application that their clients use (a shared environment). These are customizable applications based on client requirements. For each client they ask us to set up a similar environment in the Dev account. On the front end we have Apache, which redirects requests to the respective application server based on its vhost entry.

For two of their other products we've set up a dedicated environment in a VPC. In each VPC only that specific product is running, and we've separated out different schemas for each client using that product: two application servers running multiple Tomcats plus one database server.

Until now they used to give us a WAR file location; we would download the WAR file and do manual deployments. Now they have asked us to set up Jenkins, and they'll give us access to a branch so that we can do the deployments using Jenkins.

Goal:

The client wants us to implement a configuration management tool. However, after reviewing the architecture, I see little opportunity to reap the benefits of a CM tool in this environment, because there are only two servers connected to one database server in each environment. I don't see frequent changes in terms of provisioning more servers, apart from deploying a new Tomcat, setting up a new environment for a new client, setting up the same environment in the Dev account, etc.

The Question:

Can anyone suggest a CM tool that would work best in the scenario described above, and explain how it would help? Inexpensive options are especially welcome.

Thank you very much for the help :)

by Varun Malhotra at May 24, 2015 09:39 AM

Compile ZeroMQ with MinGW from Qt5

In order to use nzmqt on Qt5 (Windows), I downloaded and compiled ZeroMQ 3.2.5 as described on GitHub. My Qt5 application compiles fine, but it doesn't run: it complains about the entry point of libstdc++-6.dll.

I guess it's due to the different versions of MinGW used to compile ZeroMQ (the one included in the RubyDevKit) and my application.

Thus, I'm trying to compile ZeroMQ with the MinGW which comes with Qt5... Unfortunately it's not enough to run mingw32-make from the Qt5 folders, because it doesn't accept the "fail" command in the Makefile:

$(RECURSIVE_TARGETS):
@fail= failcom='exit 1'; \

This is beyond my knowledge. I'm wondering if anyone was able to use nzmqt with ZeroMQ 3.2.5 under Windows and Qt5.

by Mark at May 24, 2015 09:31 AM

Planet Clojure

Manipulate source code like the dom

I've always believed that most programming problems are expression problems. Being able to say what it is that we want usually gets us 95% of the way to solving the problem. The rest usually takes care of itself. Libraries and tools should help us express ourselves through higher paradigms of thinking, where the most powerful features are declarative paradigms that allow us to just say what we want and for the library to figure out how to get us there.

Source code manipulation has always been a great source of difficulty. Most source code manipulation programs are really just built upon regular expressions with some parsing and then lots of string manipulation. Lisp code, having a more regular shape tends to be easier to manipulate, but I haven't really seen any nice tools for directly dealing with source code.

Having said that, I'm extremely excited to show off a new library for source code manipulation, based on the principle that lisp code is in essence a huge tree. The library is called jai and is inspired by css/xpath/jquery. I've been working on and off on this concept for about a year, but it came together in the past month, when I had some time off to polish the fine-grained control and the placement of the zipper at the exact location that I want it to be. The traversal code alone took three tries to get right. Tree-walking is super hard, and I now have a real appreciation for rewrite-clj, without which this library would have been impossible.

As lisp code follows a tree-like structure, it is very useful to have a simple language to be able to query as well as update elements of that tree. The best tool for source code manipulation is rewrite-clj. However, it is hard to reason about the higher level concepts of code when using just a zipper for traversal. jai is essentially a query/manipulation tool inspired by jquery and css selectors that make for easy dom manipulation and query. Instead of writing the following code with rewrite-clj:

(require '[rewrite-clj.zip :as z])

(if (and (-> zloc z/prev z/prev z/sexpr (= "defn"))
         (-> zloc z/prev z/sexpr vector?))
    (do-something zloc)
    zloc)

jai allows the same logic to be written in a much more expressive manner:

(use 'jai.query)

($ zloc [(defn ^:% vector? | _)] do-something)

More examples can be seen in the documentation

There have been many forerunners of thinking about source code as data; tangible data that we can control, reason about and manipulate as we see fit. The first for me was codeq, the second being rewrite-clj. The fact that we can take source code and change it structurally with so much ease is the reason why I find the lisp paradigm so incredible. There is no way to do this in any other language bar html/xml (hence the css/xpath inspired syntax).

Another paradigm that I've been playing with is that the query should take the same shape as the actual thing that we are querying for. This is nothing revolutionary; in fact, it's kind of common-sensical. We are seeing this with mongodb and graphql, but to be honest we should be using it everywhere. We saw a huge uptake in mongodb because people saw how easy it was to create applications quickly, thanks to the absence of the mental overhead of using sql. So with jai, I wanted it to feel as intuitive as possible. It makes heavy use of core.match to do some cool pattern matching behind the scenes. In the past, the trade-off between speed and expressiveness meant that being declarative could lead to tremendous losses in performance, but now it matters less and less. Of course there may be exceptions to this rule, but in general we as programmers/toolmakers should act as enablers, not gatekeepers. Programming should be easy, intuitive and above all else, fun.

So please have a play =)

by Chris Zheng at May 24, 2015 09:12 AM

StackOverflow

Playframework Scala Transform Request from Json to x-www-form-urlencoded

Hi, I am new to Scala and the Play Framework. I am getting an AJAX request in JSON format, and I need to make another request to another server in x-www-form-urlencoded format.

I have this code in the controller

  def getToken = Action.async(parse.json) { request =>
    WS.url("https://api.xxxx.com/v1/yyyy")
      .withHeaders(
        "accept" -> "application/json",
        "content-type" -> "application/x-www-form-urlencoded",
        "Authorization" -> "Auth %s".format(apiKey)
      ).post(request.body) map { response =>
        Logger.info("response from get user: " +  Json.prettyPrint(response.json))
        Ok("ok")
    }
  }

I tried different ways, but I can't get this working. Maybe I should write a formatter in the model. What would be the best way to convert the JSON request into an x-www-form-urlencoded request?
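A sketch of one approach, relying on the fact that Play's WS encodes a Map[String, Seq[String]] body as application/x-www-form-urlencoded (so the content-type header doesn't need to be set by hand). The grant_type field name here is purely illustrative; pull whatever fields you actually need out of the incoming JSON:

def getToken = Action.async(parse.json) { request =>
  // rebuild the incoming JSON fields as form parameters
  val formBody: Map[String, Seq[String]] = Map(
    "grant_type" -> Seq((request.body \ "grant_type").as[String])
  )
  WS.url("https://api.xxxx.com/v1/yyyy")
    .withHeaders(
      "accept" -> "application/json",
      "Authorization" -> "Auth %s".format(apiKey)
    )
    .post(formBody) // the Map body is sent as x-www-form-urlencoded
    .map { response =>
      Logger.info("response from get user: " + Json.prettyPrint(response.json))
      Ok("ok")
    }
}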

thank you

by agusgambina at May 24, 2015 08:57 AM

form.fill() not working in one project, works fine in another

I have 2 projects with the names:

  • old
  • new

I made a form in "old" with a radio button and some text fields to enter age, weight, etc. Then I made a new project (activator new new) and developed the form further. The controller pre-fills the form, pre-selects the radio button, etc. Now I wanted to update "old" and copied the code from "new" into "old". ALL controller classes, ALL model classes and ALL view classes contain the EXACT same code! I even checked the file sizes manually several times, BUT in "old" the form does not get pre-filled! No matter what I do, nothing happens. I have no clue why this happens or what to do.

My code:

Application.java:

package controllers;

import models.User;
import play.data.Form;
import play.mvc.Controller;
import play.mvc.Result;

public class Application extends Controller {

    static Form<User> userForm = Form.form(User.class);

    public static Result index() {

        User user = new User();
        Form<User> preFilledForm = userForm.fill(user);

        return ok(views.html.index.render(preFilledForm));
    }
}

User.java:

package models;

public class User {
    public Integer gewicht;
    public Integer groesse;
    public Integer alter;

    public Float grundUmsatz;

    public String geschlecht = "Mann";

    public User(){
        gewicht = 0;
        groesse = 0;
        alter = 0;
        geschlecht = "Mann";
    }
}

index.scala.html:

@(userForm : Form[User])

@import helper._
@import helper.twitterBootstrap._

@main("App - index") {

    @helper.form(action = routes.Application.submit(), 'id -> "userForm"){
        <fieldset>
            @helper.inputRadioGroup(
            userForm("Geschlecht"),
            options = options("Mann"->"Mann","Frau"->"Frau"),
            '_label -> "Gender",
            '_error -> userForm("Geschlecht").error.map(_.withMessage("select gender"))
            )
        </fieldset>

        @helper.inputText(userForm("Gewicht"))
        @helper.inputText(userForm("Groesse"))
        @helper.inputText(userForm("Alter"))
        <input type="submit" class="btn primary" value="Send">
    }
}

main.scala.html:

@(title: String)(content: Html)

<!DOCTYPE html>

<html>
    <head>
        <title>@title</title>
        <link rel="stylesheet" media="screen" href="@routes.Assets.at("stylesheets/main.css")">
        <link rel="shortcut icon" type="image/png" href="@routes.Assets.at("images/favicon.png")">
        <script src="@routes.Assets.at("javascripts/hello.js")" type="text/javascript"></script>
    </head>
    <body>
        @content
    </body>
</html>

routes-file:

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

# Home page
GET     /                           controllers.Application.index()
POST    /auswertung/                controllers.Application.submit()

# Map static resources from the /public folder to the /assets URL path
GET     /assets/*file               controllers.Assets.at(path="/public", file)
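One discrepancy visible in the posted code (an observation, not a guaranteed fix): the template looks up capitalized field names, Geschlecht, Gewicht, Groesse and Alter, while the User properties are lowercase (geschlecht, gewicht, groesse, alter). form.fill() populates values by bean property name, so the capitalized keys would come back empty. A sketch of the template with matching names:

@helper.inputRadioGroup(
    userForm("geschlecht"),
    options = options("Mann"->"Mann","Frau"->"Frau"),
    '_label -> "Gender"
)
@helper.inputText(userForm("gewicht"))
@helper.inputText(userForm("groesse"))
@helper.inputText(userForm("alter"))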

by hamena314 at May 24, 2015 08:44 AM

Default parameter in main method and pretty print a map in Scala

This program prints a table of factorials up to a given number. Given the number 10, the output would look like this:

1 != 1
2 != 2
3 != 6
4 != 24
5 != 120
6 != 720
7 != 5040
8 != 40320
9 != 362880
10!= 3628800

Here is my attempt (there's an error in it):

package nick
import Factorial.factorial
import scala.collection.immutable.TreeMap

object PPrintFactorial {
  def main(args: Array[String] = Array("12")) {
    //    if (args.length > 0)
    val listNum = 1 to args(0).toInt // args(0): the first argument
    val listFac = listNum.map(factorial)
    val numFacPair = TreeMap((listNum zip listFac): _*)
    var padding: String = " "
    //    for (sp <- listNum)
    //      padding = " " * (10 - sp.toString.length)  
    for ((k, v) <- numFacPair) println(k + padding + " ! = " + v)
  }
}

Question:

  1. Is the default parameter allowed in the main method? If yes, how to set it?
  2. How to adjust padding when printing a map? As you can see in the commented part of my code, I failed to add the padding properly.

Edit

The problem with formatting:

There's a one-space difference in front of the ! between the "10!= 3628800" line and the others. If the input number had three digits, there would be a two-space difference in front of the "100 !=" line. That is, the ! is not vertically aligned.
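
A hedged sketch (my structure, not a fix of the exact snippet): as far as I can tell, a default argument on main is never used when the JVM launches the program, since it always passes the actual (possibly empty) argument array, so the fallback has to be handled inside main. The alignment can come from a format width computed from the widest index:

object PPrintFactorialSketch {
  def factorial(n: Int): BigInt = (BigInt(1) to BigInt(n)).product

  def main(args: Array[String]): Unit = {
    val n = if (args.nonEmpty) args(0).toInt else 12 // manual default instead of a default parameter
    val width = n.toString.length                    // the widest line number decides the padding
    for (i <- 1 to n)
      println(s"%${width}d != %s".format(i, factorial(i)))
  }
}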

by Nick at May 24, 2015 08:44 AM

/r/scala

StackOverflow

RxJava: how to compose multiple Observables with dependencies and collect all results at the end?

I'm learning RxJava and, as my first experiment, trying to rewrite the code in the first run() method in this code (cited on Netflix's blog as a problem RxJava can help solve) to improve its asynchronicity using RxJava, i.e. so it doesn't wait for the result of the first Future (f1.get()) before proceeding on to the rest of the code.

f3 depends on f1. I see how to handle this, flatMap seems to do the trick:

Observable<String> f3Observable = Observable.from(executor.submit(new CallToRemoteServiceA()))
    .flatMap(new Func1<String, Observable<String>>() {
        @Override
        public Observable<String> call(String s) {
            return Observable.from(executor.submit(new CallToRemoteServiceC(s)));
        }
    });

Next, f4 and f5 depend on f2. I have this:

final Observable<Integer> f4And5Observable = Observable.from(executor.submit(new CallToRemoteServiceB()))
    .flatMap(new Func1<Integer, Observable<Integer>>() {
        @Override
        public Observable<Integer> call(Integer i) {
            Observable<Integer> f4Observable = Observable.from(executor.submit(new CallToRemoteServiceD(i)));
            Observable<Integer> f5Observable = Observable.from(executor.submit(new CallToRemoteServiceE(i)));
            return Observable.merge(f4Observable, f5Observable);
        }
    });

Which starts to get weird (merging them probably isn't what I want...) but allows me to do this at the end, which is not quite what I want:

f3Observable.subscribe(new Action1<String>() {
    @Override
    public void call(String s) {
        System.out.println("Observed from f3: " + s);
        f4And5Observable.subscribe(new Action1<Integer>() {
            @Override
            public void call(Integer i) {
                System.out.println("Observed from f4 and f5: " + i);
            }
        });
    }
});

That gives me:

Observed from f3: responseB_responseA
Observed from f4 and f5: 140
Observed from f4 and f5: 5100

which is all the numbers, but unfortunately I get the results in separate invocations, so I can't quite replace the final println in the original code:

System.out.println(f3.get() + " => " + (f4.get() * f5.get()));

I don't understand how to get access to both those return values on the same line. I think there's probably some functional programming fu I'm missing here. How can I do this? Thanks.
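
A hedged sketch (stand-in observables, RxJava 1.x API, written in Scala to match the rest of this digest): zip waits for one value from each source and combines them in a single callback, which puts f3's string and the f4 * f5 product on the same line:

import rx.Observable
import rx.functions.{Action1, Func2}

// hypothetical stand-ins for f3Observable, f4Observable and f5Observable
val f3 = Observable.just("responseB_responseA")
val f4 = Observable.just(140: Integer)
val f5 = Observable.just(5100: Integer)

val f4TimesF5 = Observable.zip(f4, f5, new Func2[Integer, Integer, Integer] {
  override def call(a: Integer, b: Integer): Integer = a * b
})
val combined = Observable.zip(f3, f4TimesF5, new Func2[String, Integer, String] {
  override def call(s: String, product: Integer): String = s + " => " + product
})
combined.subscribe(new Action1[String] {
  override def call(line: String): Unit = println(line) // responseB_responseA => 714000
})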

by Steve Kehlet at May 24, 2015 08:25 AM

/r/emacs

How to move init to org-babel?

I am still learning emacs and org-mode. At this pace it will take me a long time to figure out how to use source code in org, and then how to keep it from breaking down time and again.

Meanwhile, my init has become quite messy and unmanageable. Could you please guide me on moving my init to org-babel? I searched but couldn't find a step-by-step, do-this-then-that tutorial.

submitted by curious-scribbler
[link] [3 comments]

May 24, 2015 08:07 AM

/r/clojure

Clojure (parameter) naming convention

Hi! I'm reading Manning's Joy of Clojure and I find myself trying to figure out code snippets like this along the way:

(defn keys-apply [f ks m] (let [only (select-keys m ks)] (zipmap (keys only) (map f (vals only))))) 

Is using cryptic, non-descriptive parameter names like 'f', 'ks', 'm' the usual, recommended way of writing Clojure programs? What is the convention? I think [func keys map] would be better, but in that case map would shadow (not sure about the term here) the map function.

What would you do in this situation? Use the fully qualified name for the function (clojure.core/map for example) or prepend the variable name with something (_map for example)? Or just leave 'm' there and let the reader guess what it means?

submitted by DavsX
[link] [6 comments]

May 24, 2015 08:07 AM

QuantOverflow

Build spot rate curve with multiple treasuries for each maturity

I have the following treasuries:

  1. T 0 1/4 01/31/15 at 100.1236
  2. T 2 1/4 01/31/15 at 101.1257
  3. T 0 1/4 02/15/15 at 100.1251
  4. T 4 02/15/15 at 101.9994
  5. T 11 1/4 02/15/15 at 105.6269
  6. T 0 1/4 02/28/15 at 100.1237
  7. T 2 3/8 02/28/15 at 101.1878
  8. T 0 3/8 03/15/15 at 100.1866
  9. T 0 1/4 03/31/15 at 100.1182
  10. T 2 1/2 03/31/15 at 101.2421
  11. T 0 3/8 04/15/15 at 100.1784
  12. T 0 1/8 04/30/15 at 100.0554
  13. T 2 1/2 04/30/15 at 101.2375
  14. T 0 1/4 05/15/15 at 100.1103
  15. T 4 1/8 05/15/15 at 102.0451
  16. T 2 1/8 05/31/15 at 101.0417
  17. T 0 1/4 05/31/15 at 100.1095
  18. T 0 3/8 06/15/15 at 100.1644
  19. T 0 3/8 06/30/15 at 100.1617
  20. T 1 7/8 06/30/15 at 100.9101

And I want to calculate the 6-month spot rate curve from today's date. When I do this I get negative values for the spot rate. I followed the BEY convention and used this question as a reference. I got negative spot rates for the first part of the curve. Is this correct? Another point to consider is that I have multiple securities for the same expiration date (i.e. 1 and 2, 6 and 7), so when I build the spot rate I get two of them for one maturity. Which method should I use to weight these?
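
One hedged note, with a sketch of the arithmetic (the function and names are mine; only standard bond-pricing identities): a security with a single remaining semiannual payment pins down one discount factor, and whenever the dirty price exceeds that final cash flow the factor exceeds 1, which forces a negative spot rate; that is one plausible source of the negative values at the very short end. For the duplicated maturities, a common choice is to fit the curve by (weighted) least squares across all securities instead of bootstrapping one security per date; averaging the implied discount factors is a cruder alternative.

// implied discount factor and BEY spot rate from a bond with one payment left
def spotRateBEY(dirtyPrice: Double, annualCouponRate: Double, tYears: Double): Double = {
  val finalCashFlow = 100.0 + 100.0 * annualCouponRate / 2.0 // redemption + last semiannual coupon
  val d = dirtyPrice / finalCashFlow                         // discount factor to maturity
  2.0 * (math.pow(d, -1.0 / (2.0 * tYears)) - 1.0)           // semiannual compounding (BEY)
}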

by capm at May 24, 2015 08:05 AM

StackOverflow

Akka. How to know that all children actors finished their job

I created Master actor and child actors (created using router from Master).

Master receives some Job and splits this job into small tasks and sends them to child actors (to routees).

The problem I am trying to solve is how to properly notify my Master when the child actors have finished their job.

In some tutorials (the Pi approximation, and an example from the Scala in Action book) the Master actor, after receiving responses from the children, compares the size of the initial array of tasks with the size of the received results:

if(receivedResultsFromChildren.size == initialTasks.size) {
    // it's mean children finished their job
}

But I think this is very fragile, because if some child actor throws an exception, it will not send a result back to the sender (back to the Master), so this condition will never evaluate to true.

So how do I properly notify the Master that all children have finished their jobs?

I think one option is to Broadcast(PoisonPill) to the children and then listen for the Terminated(router) message (using so-called deathWatch). Is that an OK solution?

If using Broadcast(PoisonPill) is better, should I then register some supervision strategy that stops a routee in case of an exception? Because if an exception occurs, the routee will be restarted (as far as I know), which means the Master actor will never receive Terminated(router). Is that correct?
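
A hedged sketch of the deathWatch route the question suggests (message types and names invented): the Master watches the router, broadcasts PoisonPill once everything has been dispatched, and treats Terminated(router) as the completion signal:

import akka.actor._
import akka.routing.Broadcast

case class TaskResult(value: Int) // hypothetical message types
case object AllWorkDispatched

class Master(router: ActorRef) extends Actor {
  context.watch(router) // deathWatch: we receive Terminated(router) when it stops

  def receive = {
    case TaskResult(v) =>
      // collect partial results from a child here
    case AllWorkDispatched =>
      router ! Broadcast(PoisonPill) // routees stop after draining their mailboxes
    case Terminated(`router`) =>
      // for a plain pool router the router stops itself once every routee has stopped
      println("all routees are done")
      context.stop(self)
  }
}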

by MyTitle at May 24, 2015 07:46 AM

CompsciOverflow

Placing a fixed number of points on a curve to make the finite second derivative constant

I am given a 1D curve $f(x)$ that starts at $a$ and ends at $b$. I have $n$ points I have to place on the curve, and I have to place two points at $a$ and $b$, respectively. Is there a way to place points $a=x_1<...<x_n=b$ along the line so that the finite second derivative (i.e., $f(x_{i-1})-2f(x_i)+f(x_{i+1})$) is constant for all $i=2,...,n-1$? In my particular problem, $f(x)=-\log(x)$ on the interval $[\frac{1}{t},1]$. Maybe this additional structure makes the problem more tractable.

by huehue at May 24, 2015 07:30 AM

StackOverflow

How do you add elements to a set with DataStax QueryBuilder?

I have a table whose column types are

text, bigint, set<text> 

I'm trying to update a single row and add an element to the set using QueryBuilder.

The code that overwrites the existing set looks like this (note this is scala):

val query = QueryBuilder.update("twitter", "tweets")
  .`with`(QueryBuilder.set("sinceid", update.sinceID))
  .and(QueryBuilder.set("tweets", setAsJavaSet(update.tweets)))
  .where(QueryBuilder.eq("handle", update.handle))

I was able to find the actual CQL for adding an element to a set which is:

UPDATE users
SET emails = emails + {'fb@friendsofmordor.org'} WHERE user_id = 'frodo';

But could not find an example using QueryBuilder.

Based off of the CQL I also tried:

  .and(QueryBuilder.set("tweets", "tweets"+{setAsJavaSet(update.tweets)}))

But it did not work. Thanks in advance
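
A hedged sketch (assuming the 2.x driver's QueryBuilder.addAll assignment, and reusing the question's update object and setAsJavaSet import): addAll should render the "tweets = tweets + {...}" style of CQL for a set column, instead of overwriting it with set:

import com.datastax.driver.core.querybuilder.QueryBuilder

// QueryBuilder.add appends one element; addAll appends a whole java.util.Set
val query = QueryBuilder.update("twitter", "tweets")
  .`with`(QueryBuilder.set("sinceid", update.sinceID))
  .and(QueryBuilder.addAll("tweets", setAsJavaSet(update.tweets)))
  .where(QueryBuilder.eq("handle", update.handle))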

by plambre at May 24, 2015 07:05 AM

Halfbakery

StackOverflow

Slick 3.0.0 Aggregate Querries

I'm trying to do a complex querying on two tables. They look like below:

Table1:
  id:
  name:
  table2Id:
  version:

Table2:
  id:
  comments:

Assuming that I have the appropriate Slick classes that represent these tables, I'm trying to get all the elements from Table1 and Table2 that satisfy the following condition:

Table1.table2Id === Table2.id and max(Table1.version)

I tried the following:

val groupById = (for {
  elem1 <- table1Elems
  elem2 <- table2Elems if elem1.id === elem2.id
} yield (elem1, elem2)).groupBy(_._1.id)

I know that I have to map the groupById and look for the max version, but I'm not getting the syntax right! Any help?

What I need is effectively Slick equivalent of the following SQL query:

SELECT * from t t1 WHERE t1.rev = (SELECT max(rev) FROM t t2 WHERE t2.id = t1.id)
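
A hedged sketch (assuming table1Elems is the TableQuery for Table1, with id and version columns): the SQL above translates to a correlated subquery in Slick, comparing each row's version with the maximum over rows sharing its id:

val latestRows = table1Elems.filter { t1 =>
  t1.version === table1Elems.filter(_.id === t1.id).map(_.version).max
}
// db.run(latestRows.result) keeps only the rows holding the max version for their id;
// the join with table2Elems can then be applied to latestRows as in the for-comprehension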

by user3102968 at May 24, 2015 06:53 AM

Halfbakery

UnixOverflow

"Call to undefined function session_start() /blah/file line 2" [on hold]

I ran a normal update portsnap fetch update && portmaster -Da which ran without error. I have not been able to restore sessions since.

error:

Fatal error: Call to undefined function session_start() /blah/file line 2

code:

session_start();

info:

cat /var/db/ports/lang_php56/options
# This file is auto-generated by 'make config'.
# Options for php56-5.6.5
_OPTIONS_READ=php56-5.6.5
_FILE_COMPLETE_OPTIONS_LIST=CLI CGI FPM EMBED PHPDBG DEBUG DTRACE IPV6 MAILHEAD LINKTHR ZTS
OPTIONS_FILE_SET+=CLI
OPTIONS_FILE_SET+=CGI
OPTIONS_FILE_SET+=FPM
OPTIONS_FILE_UNSET+=EMBED
OPTIONS_FILE_UNSET+=PHPDBG
OPTIONS_FILE_UNSET+=DEBUG
OPTIONS_FILE_UNSET+=DTRACE
OPTIONS_FILE_SET+=IPV6
OPTIONS_FILE_UNSET+=MAILHEAD
OPTIONS_FILE_SET+=LINKTHR
OPTIONS_FILE_SET+=ZTS

All PHP packages (including sessions, php56, mod_php and the extensions) have been reinstalled from ports source, error-free.

[\u@vader:/usr/ports/www/php56-session] # portmaster -Da
===>>> Starting check of installed ports for available updates

===>>> All ports are up to date

php -v

PHP 5.6.8 (cli) (built: May 7 2015 15:14:00)
Copyright (c) 1997-2015 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2015 Zend Technologies
with Zend OPcache v7.0.4-dev, Copyright (c) 1999-2015, by Zend Technologies 

by nix at May 24, 2015 04:48 AM

StackOverflow

Redirect http request to https (spray io/scala)

How do I redirect my http requests to https? Though this has been a common question, I don't see any solution for spray io.

  • I have enabled ssl using Apache CamelSslConfiguration which works just fine.
  • I have setup port forwarding (iptables) from 80 -> 8081 and 443 -> 8081

Below is my Boot.scala file

object Boot extends App with CamelSslConfiguration with ApiScheduler {

  // we need an ActorSystem to host our application in
  implicit val system = ActorSystem("on-spray-can")

  // create and start our service actor
  val service = system.actorOf(Props[ApiServiceActor], "api-service")

  implicit val timeout = Timeout(50.seconds)
  // start a new HTTP server on port 8080 with our service actor as the handler
  IO(Http) ? Http.Bind(service, interface = "0.0.0.0", port = 8081)
}

by user2489122 at May 24, 2015 04:44 AM

CompsciOverflow

Maximum Flow with Binary Capacities

Consider the problem of finding a maximum flow from node $s$ to node $t$ in a directed graph where each link has capacity either $0$ or $1$. What is the state of the art regarding how fast this flow can be found?

It seems that Dinic's algorithm will accomplish this in $O \left( m n^{2/3} \right)$ time, where $n$ is the number of nodes and $m$ is the number of edges. From Table 1 in this paper, it seems reasonable to guess this was still the state of the art in 2001. Is this the best that is currently known?

by Pramod T. at May 24, 2015 04:43 AM

Can Circuit Value Problem or HORN-SAT be reduced to PATH problem?

PATH = {(X,R,S,T) | there exists an x in S that is admissible}, where R is a ternary relation on X (a subset of X x X x X), and S and T are unary relations on X. An element x of X is admissible if it is in T, or if there are two admissible elements y and z such that (x,y,z) is in R.

So, is there any logspace reduction from CVP or HORN-SAT to this problem, so I can prove that PATH is P-Complete?

by gnar at May 24, 2015 04:24 AM

Draw a DFA that accepts ((aa*)*b)*

A homework question asks me to draw a DFA for the regular expression

$((aa^*)^*b)^*$

I'm having trouble with this because I'm not sure how to express, in terms of a DFA, the idea of $a$ followed by zero or more $a$'s, repeated many times.

I would guess that $(aa^*)^*$ should be the same thing as $\lambda + a^*$, but I'm not sure if I can formally say that. If I could, it would make my DFA simply:

[image: simple DFA]

by Imray at May 24, 2015 04:09 AM

QuantOverflow

Building curves using onshore or offshore JPY overnight rates?

I am trying to build Japanese yen interest rate curves. When defining the curve instruments for the 'OIS' (discount) curve (aka TONAR), I am uncertain as to which rate to use for the overnight deposit at the short end. Should I be using the offshore JPY depo rate or the onshore BoJ depo rate, and why?

by Armen Safieh-Garabedian at May 24, 2015 04:05 AM

Halfbakery

StackOverflow

Internal Error when type checking the RPS example

Here is the example from core.typed github page:

(ns typedclj.rps-async
  (:require [clojure.core.typed :as t]
            [clojure.core.async :as a]
            [clojure.core.typed.async :as ta]))

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Types
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(t/defalias Move
            "A legal move in rock-paper-scissors"
            (t/U ':rock ':paper ':scissors))

(t/defalias PlayerName
            "A player's name in rock-paper-scissors"
            t/Str)

(t/defalias PlayerMove
            "A move in rock-paper-scissors. A Tuple of player name and move"
            '[PlayerName Move])

(t/defalias RPSResult
            "The result of a rock-paper-scissors match.
            A 3 place vector of the two player moves, and the winner"
            '[PlayerMove PlayerMove PlayerName])

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Implementation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(t/ann MOVES (t/Vec Move))
(def MOVES [:rock :paper :scissors])

(t/ann BEATS (t/Map Move Move))
(def BEATS {:rock :scissors, :paper :rock, :scissors :paper})

(t/ann rand-player [PlayerName -> (ta/Chan PlayerMove)])
(defn rand-player
  "Create a named player and return a channel to report moves."
  [name]
  (let [out (ta/chan :- PlayerMove)]
    (ta/go (while true (a/>! out [name (rand-nth MOVES)])))
    out))

(t/ann winner [PlayerMove PlayerMove -> PlayerName])
(defn winner
  "Based on two moves, return the name of the winner."
  [[name1 move1] [name2 move2]]
  (cond
    (= move1 move2) "no one"
    (= move2 (BEATS move1)) name1
    :else name2))

(t/ann judge [(ta/Chan PlayerMove) (ta/Chan PlayerMove) -> (ta/Chan RPSResult)])
(defn judge
  "Given two channels on which players report moves, create and return an
  output channel to report the results of each match as [move1 move2 winner]."
  [p1 p2]
  (let [out (ta/chan :- RPSResult)]
    (ta/go
      (while true
        (let [m1 (a/<! p1)
              m2 (a/<! p2)]
          (assert m1)
          (assert m2)
          (a/>! out (t/ann-form [m1 m2 (winner m1 m2)]
                                RPSResult)))))
    out))

(t/ann init (t/IFn [PlayerName PlayerName -> (ta/Chan RPSResult)]
                   [-> (ta/Chan RPSResult)]))
(defn init
  "Create 2 players (by default Alice and Bob) and return an output channel of match results."
  ([] (init "Alice" "Bob"))
  ([n1 n2] (judge (rand-player n1) (rand-player n2))))

(t/ann report [PlayerMove PlayerMove PlayerName -> nil])
(defn report
  "Report results of a match to the console."
  [[name1 move1] [name2 move2] winner]
  (println)
  (println name1 "throws" move1)
  (println name2 "throws" move2)
  (println winner "wins!"))

(t/ann play [(ta/Chan RPSResult) -> nil])
(defn play
  "Play by taking a match reporting channel and reporting the results of the latest match."
  [out-chan]
  (let [[move1 move2 winner] (a/<!! out-chan)]
    (assert move1)
    (assert move2)
    (assert winner)
    (report move1 move2 winner)))

(t/ann play-many [(ta/Chan RPSResult) t/Int -> (t/Map t/Any t/Any)])
(defn play-many
  "Play n matches from out-chan and report a summary of the results."
  [out-chan n]
  (t/loop [remaining :- t/Int, n
           results :- (t/Map PlayerName t/Int), {}]
          (if (zero? remaining)
            results
            (let [[m1 m2 winner] (a/<!! out-chan)]
              (assert m1)
              (assert m2)
              (assert winner)
              (recur (dec remaining)
                     (merge-with + results {winner 1}))))))


(fn []
  (t/ann-form (a/<!! (init))
              (t/U nil RPSResult)))

If you check it in the repl:

(clojure.core.typed/check-ns 'typedclj.rps-async)

You get an error:

Initializing core.typed ...
Building core.typed base environments ...
Finished building base environments
"Elapsed time: 9213.869003 msecs"
core.typed initialized.
Start collecting typedclj.rps-async
Start collecting clojure.core.typed.async
Finished collecting clojure.core.typed.async
Finished collecting typedclj.rps-async
Collected 2 namespaces in 1725.941447 msecs
Not checking clojure.core.typed (does not depend on clojure.core.typed)
Not checking clojure.core.async (does not depend on clojure.core.typed)
Not checking clojure.core.async.impl.channels (does not depend on clojure.core.typed)
Not checking clojure.core.async.impl.ioc-macros (does not depend on clojure.core.typed)
Not checking clojure.core.async.impl.protocols (does not depend on clojure.core.typed)
Not checking clojure.core.async.impl.dispatch (does not depend on clojure.core.typed)
Not checking clojure.core.typed.util-vars (does not depend on clojure.core.typed)
Start checking clojure.core.typed.async
Checked clojure.core.typed.async in 502.126883 msecs
Start checking typedclj.rps-async
Type Error (typedclj/rps_async.clj:91:13) Internal Error (typedclj/rps_async.clj:91:5) Bad call to path-type: nil, ({:idx 0})
ExceptionInfo Type Checker: Found 1 error  clojure.core/ex-info (core.clj:4403)

What went wrong here?

by qed at May 24, 2015 03:44 AM

Race conditions in pure functional programming

I have encountered the following statement:

"Programming in a functional style makes the state presented to your code explicit, which makes it much easier to reason about, and, in a completely pure system, makes thread race conditions impossible."

I see this point of view, but how can I achieve this in real-world problems?

For example:

There is a functional program with two functions:

def getMoney(actMoney: Integer, moneyToGet: Integer): Integer 
    = actMoney - moneyToGet

def putMoney(actMoney: Integer, moneyToPut: Integer): Integer  
    = actMoney + moneyToPut

Then I really would like to define the functions getActualMoney and saveActualMoney for a given Account, but I can't: they are not pure. That's because I get the Money for a given Account from some memory, and I save the Money for a given Account to some memory (there is state).

def getActualMoney(accountNo: String): Integer = {...}

def saveActualMoney(accountNo: String, actMoney: Integer): Unit = {...}

So I have to get my current Money from "outside". Let's say my program works that way. Now I have two simultaneous requests for the same account: the first gets some money, the second puts some money. Of course I can get two different results. So there is a race condition.

I understand that I should make a transaction on this account "outside" the program code, so that such a situation cannot happen. For better concurrency, the functions should look like this:

def getMoney(
        acountNo: String, 
        actMoney: Integer, 
        moneyToGet: Integer): (String, Integer) 
    = (acountNo, actMoney - moneyToGet)

def putMoney(
        acountNo: String,
        actMoney: Integer, 
        moneyToPut: Integer): (String, Integer) 
    = (acountNo, actMoney + moneyToPut)        

Is this what it is all about? Is it worth doing?
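
A hedged sketch (names are mine) of the usual compromise: keep the transition functions pure and confine the mutable cell to a single place, here a compare-and-set loop, so simultaneous get/put requests for the same account are serialized against one consistent state:

import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

object Accounts {
  private val state = new AtomicReference(Map.empty[String, Int])

  // f is a pure balance transition, e.g. (_ + moneyToPut) or (_ - moneyToGet)
  @tailrec
  def update(accountNo: String)(f: Int => Int): Int = {
    val old  = state.get
    val next = old.updated(accountNo, f(old.getOrElse(accountNo, 0)))
    if (state.compareAndSet(old, next)) next(accountNo) // won the race: applied atomically
    else update(accountNo)(f)                           // lost the race: retry on fresh state
  }
}

Accounts.update("acc-1")(_ + 100) // put money
Accounts.update("acc-1")(_ - 30)  // get money; returns 70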

by galvanize at May 24, 2015 03:38 AM

TheoryOverflow

The meaning of separations in cryptography

From the paper of Impagliazzo and Rudich that separates black-box key agreement from one-way permutation:

We provide strong evidence that it will be difficult to prove that secure secret agreement is possible assuming only that a one-way permutation exists. We model the existence of a one-way permutation by allowing all parties access to a randomly chosen permutation oracle. A random permutation oracle is provable one-way in the strongest possible sense.

Given that we don't know whether one-way functions exist, how should one interpret this?

Also, they exhibit an oracle relative to which a one-way permutation exists but key agreement does not. Again, how is that possible? What does that mean? Couldn't a key agreement be constructed independently, ignoring both the oracle and the one-way permutation?

(P.S., I'm an undergrad trying to understand these results on my own. I couldn't find textbook-like results on the topic. These might be trivial questions, but I couldn't move forward without first understanding them. A clear explanation would be greatly appreciated and beneficial for other newbies.)

by user34219 at May 24, 2015 03:31 AM

StackOverflow

Datastax java-driver QueryBuilder.update issue in scala

Here is the scala code I am attempting to use to update a row in a cassandra database:

 val query = QueryBuilder.update("twitter","tweets")
             .`with`(QueryBuilder.set("sinceid", update.sinceID)
             .and(QueryBuilder.set("tweets", update.tweets)))
             .where(QueryBuilder.eq("handle", update.handle));

which is based off of the suggestion here

Everything seems to work correctly except for the ".and". The error I get back is

value and is not a member of com.datastax.driver.core.querybuilder.Assignment

Which, of course, is true, but at that point in the statement it should be using

com.datastax.driver.core.querybuilder.Update.Conditions

Which contains the and()
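
For what it's worth, a hedged reading of the error: in the snippet above, the closing parenthesis of `with`(...) wraps the second set(...), so .and is invoked on the inner Assignment rather than on the Update.Assignments that `with` returns. Moving one parenthesis keeps the chain at the level that has and():

val query = QueryBuilder.update("twitter", "tweets")
  .`with`(QueryBuilder.set("sinceid", update.sinceID)) // returns Update.Assignments
  .and(QueryBuilder.set("tweets", update.tweets))      // and() is defined there
  .where(QueryBuilder.eq("handle", update.handle))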

Thank you for your help

by plambre at May 24, 2015 03:13 AM

Any way to extract XML into constants in Scala?

Beginner to Scala here coming from a Java background. Suppose I have a bunch of XML just floating around in my code:

val x = <file> <name> some-file.txt </name> </file>

Is there any way I can extract the XML elements into named constants? I've tried the following, but it doesn't work as on the first line it's still expecting a closing </file> tag:

val FileStart = <file>
val FileEnd = </file>

I ask because I want to avoid magic values floating around in my code. I cringe at the thought of using the <name> tag 100 times and then having its value change ten months down the road (which would result in a terrible tag hunt). I'd much rather have it defined in a constant somewhere.

Better yet, is there a Scala way to efficiently approach this problem? Maybe I'm stuck in Java thinking.
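
A hedged sketch: Scala XML literals must be complete, well-formed expressions, so an opening tag cannot live in one constant and the closing tag in another. One idiomatic alternative is small builder functions, which keep each tag name defined in exactly one place:

import scala.xml.{Elem, Node}

def file(children: Node*): Elem = <file>{children}</file>
def name(text: String): Elem   = <name>{text}</name>

val x = file(name("some-file.txt")) // <file><name>some-file.txt</name></file>

Renaming the tag ten months later then means changing a single function body instead of hunting down a hundred literals.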

by Dylan Knowles at May 24, 2015 03:12 AM

QuantOverflow

Pricing options under restricted domain

How would I price an option when the underlying security is unable to trade above a certain price? I assumed this would be as simple as restricting the limits of integration of the PDF to B (the barrier) instead of infinity but it doesn't work.

For example, if the present price is 30, the barrier is 40, and the strike is 35 then the option price will never exceed $5. If the strike exceeds the barrier the call option is always worthless.

After 5 days and numerous attempts, it works for all possible prices, barriers and strikes.

Define barrier to be $p_d$

Strike $p_s$

Present price $p_1$

Criteria:

$p_s \le p_d$

if $p_d \le p_s$ the option is worthless

The barrier has two effects: it makes calls cheaper for all strikes and it can restrict the maximum call price from $p_1$ to something less

But the barrier itself is not a barrier option meaning that the option doesn't become worthless if it's crossed.

The maximum call price $m$ is:

$m=(p_1-g_1)H(p_1-g_1)$

$g_1=p_s-\left[(p_d-p_1)+(p_s+p_1-p_d)H(p_d-p_1-p_s)\right]$

Where H is the heaviside function

This can be tested plugging in various barriers, strikes, and initial prices. If $p_1=30,p_s=35,p_d=40$ then $m=5$

In general, if $p_1 \le p_d-p_s$ then $m=p_1$

And otherwise $m=p_d-p_s$

Between the strike and the barrier you have a restricted Black-Scholes:

$u=\ln(p_1)+(r-\alpha^2/2)t$

$a=\sqrt{t}\alpha$

$\int_{p_s}^{p_d}{\frac{e^{-rt}(y-p_s)}{ay\sqrt{2\pi}}\exp{\left[-\frac{(\ln(y) - u)^2}{2a^2 }\right]}}\,dy$

As $p_d \to \infty$ you have the classic Black-Scholes.

As $\alpha,r,t \to \infty$ it goes to zero and the maximum theoretical price $m$ takes over. Otherwise the call is somewhere in between.

Probability of expiring above the barrier:

$N(d_1)=\int_{p_d}^{\infty}{\frac{e^{-rt}}{p_1a\sqrt{2\pi}}\exp{\left[-\frac{(\ln(y) - u)^2}{2a^2 }\right]}}\,dy$

And a probability below it: $1-N(d_1)$

Evaluating the integrals and substituting:

$\begin{align} d_1 &= \frac{1}{\alpha\sqrt{t}}\left[\ln\left(\frac{p_1}{p_d}\right) + \left(r + \frac{\alpha^{2}}{2}\right)t\right] \\ d_2 &= \frac{1}{\alpha\sqrt{t}}\left[\ln\left(\frac{p_1}{p_d}\right) + \left(r - \frac{\alpha^{2}}{2}\right)t\right] \\ d_3 &= \frac{1}{\alpha\sqrt{t}}\left[\ln\left(\frac{p_1}{p_s}\right) + \left(r + \frac{\alpha^{2}}{2}\right)t\right] \\ d_4 &= \frac{1}{\alpha\sqrt{t}}\left[\ln\left(\frac{p_1}{p_s}\right) + \left(r - \frac{\alpha^{2}}{2}\right)t\right] \\ \end{align} $

The call is:

$\left[mN(d_1)+(p_1(N(d_3)-N(d_1))+p_se^{-rt}(N(d_2)-N(d_4)))(1-N(d_1))\right]H(p_d-p_s)$

by quantus at May 24, 2015 03:06 AM

Fefe

Israel has a new deputy foreign minister. Her name is ...

Israel has a new deputy foreign minister. Her name is Tzipi Hotovely.
Israel’s new deputy foreign minister on Thursday delivered a defiant message to the international community, saying that Israel owes no apologies for its policies in the Holy Land and citing religious texts to back her belief that it belongs to the Jewish people.
Ms. Hotovely supports the settlers in the West Bank and is against ceding land to the Palestinians. Netanyahu is also the acting foreign minister, so she is now Israel's highest-ranking full-time diplomat.
In an inaugural address to Israeli diplomats, Hotovely said Israel has tried too hard to appease the world and must stand up for itself.

“We need to return to the basic truth of our rights to this country,” she said. “This land is ours. All of it is ours. We did not come here to apologise for that.”

May 24, 2015 03:01 AM

TheoryOverflow

Applications for set theory, ordinal theory, infinite combinatorics and general topology in computer science?

I am a mathematician interested in set theory, ordinal theory, infinite combinatorics and general topology.

Are there any applications for these subjects in computer science? I have looked a bit, and found a lot of applications (of course) for finite graph theory, finite topology, low dimensional topology, geometric topology etc.

However, I am looking for applications of the infinite objects of these subjects, i.e. infinite trees (Aronszajn trees for example), infinite topology etc.

Any ideas?

Thank you!!

by user135172 at May 24, 2015 02:34 AM

Lobsters

StackOverflow

Proof by induction with multiple lists

I am following the Functional Programming in Scala lecture on Coursera and at the end of the video 5.7, Martin Odersky asks to prove by induction the correctness of the following equation :

(xs ++ ys) map f = (xs map f) ++ (ys map f)

How to handle proof by induction when there are multiple lists involved ?

I have checked the base cases of xs being Nil and ys being Nil. I have proven by induction that the equation holds when xs is replaced by x::xs, but do we also need to check the equation with ys replaced by y::ys ?

And in that case (without spoiling the exercise too much...which is not graded anyway) how do you handle : (xs ++ (y::ys)) map f ?

This is the approach I have used on a similar example, to prove that

(xs ++ ys).reverse = ys.reverse ++ xs.reverse

Proof (omitting the base case, and easy x::xs case) :

(xs ++ (y::ys)).reverse
= (xs ++ (List(y) ++ ys)).reverse         //y::ys = List(y) ++ ys
= ((xs ++ List(y)) ++ ys).reverse         //concat associativity
= ys.reverse ++ (xs ++ List(y)).reverse   //by induction hypothesis (proven with x::xs)
= ys.reverse ++ List(y).reverse ++ xs.reverse //by induction hypothesis
= ys.reverse ++ (y::Nil).reverse ++ xs.reverse //List(y) = y :: Nil
= ys.reverse ++ Nil.reverse ++ List(y) ++ xs.reverse //reverse definition
= (ys.reverse ++ List(y)) ++ xs.reverse //reverse on Nil (base case)
= (y :: ys).reverse ++ xs.reverse         //reverse definition

Is this right ?

by Gael at May 24, 2015 02:31 AM

QuantOverflow

How to identify the orders p and q for ARIMA model using least squares method?

I would like to identify the orders p and q for an ARIMA model using the least squares method in Matlab. I also have two data files (one with noise and one without).

Previously I identified p and q for AR and MA using ACF function and PACF, but now I have mixed model (ARIMA).

Could you give any tips or hints on how to do this?

by Misiek777 at May 24, 2015 02:22 AM

/r/compsci

Best intro to large DB design?

I am a fairly experienced coder, but I am working on my first large-scale project. We will be making a system involving potentially thousands of users. What books or sites should I go to for some information? We will be interfacing with AWS a LOT, so DynamoDB was the first system we have looked at.

submitted by thedrpickles
[link] [4 comments]

May 24, 2015 02:22 AM

StackOverflow

ansible lineinfile insert after every match

I would like to use the ansible lineinfile module (or something similar) to insert a line after every match of a particular regexp. (lineinfile will only insert after the last match).

This seems so simple. I swear I tried my google-fu first.

by PythonNut at May 24, 2015 02:18 AM

ClojureScript multipart post through cljs-http?

I'm trying to post some files to my server through ClojureScript. According to https://github.com/r0man/cljs-http, all I need to do is change form-params to multipart-params, but once I do that my params are ignored.

(print data-array)
(go (let [{data :body} (<! (http/post "/submit.json"
                                        {:multipart-params {:foo "bar"}}))]))

The print gives

{:image0 #<[object File]>, :image1 #<[object File]>}

(My goal is to post this array of files to my URL; I changed it to :foo "bar" for debugging purposes.) If I change :multipart-params to :form-params, my params are in the request. If I change it back to :multipart-params, they are ignored.

I'm confused about why this is happening. Does someone have a hint on where to go from here?

by dvcrn at May 24, 2015 02:09 AM

Fefe

Obama did not renew the FISC reauthorization for NSA snooping ...

Obama did not renew the FISC reauthorization for the NSA's snooping on US citizens. That normally has to happen every 90 days; this time they let the deadline pass. The Guardian puts it like this:
The administration decision ensures that beginning at 5pm ET on 1 June, for the first time since October 2001 the NSA will no longer collect en masse Americans’ phone records.
The corks must be popping at Snowden's place now!

Too bad that this doesn't help non-Americans.

May 24, 2015 02:01 AM

StackOverflow

Clojure editor/IDE recommendations on Mac OS X

I am starting to learn the Clojure programming language. Are there any recommendations for Clojure editors/IDEs on Mac OS X?

Update 2009-09-23: The Clojure space has changed tremendously since I originally posted this question. Many of the links below, especially those that refer to clojure-mode with Emacs, are out-of-date. The best Clojure IDE I found was the Enclojure Netbeans plugin which was recently released (2009-08-25).

Update 2010-04-30: Another very good article on this subject is Clojure IDEs - The Grand Tour by Lau B. Jensen. Also, for my own clojure development, I have actually moved to Emacs / swank-clojure.

by Julien Chastang at May 24, 2015 01:25 AM

How to use Stuart Sierra's component library in Clojure

I'm struggling to get my head around how to use Stuart Sierra's component library within a Clojure app. From watching his Youtube video, I think I've got an OK grasp of the problems that led to him creating the library; however I'm struggling to work out how to actually use it on a new, reasonably complex project.

I realise this sounds very vague, but it feels like there's some key concept that I'm missing, and once I understand it, I'll have a good grasp on how to use components. To put it another way, Stuart's docs and video go into the WHAT and WHY of components in considerable detail, but I'm missing the HOW.

Is there any sort of detailed tutorial/walkthrough out there that goes into:

  • why you'd use components at all for a non-trivial Clojure app
  • a methodology for how you'd break down the functionality in a non-trivial Clojure app, such that components can be implemented in a reasonably optimal fashion. It's reasonably simple when all you've got is e.g. a database, an app server and a web server tier, but I'm struggling to grasp how you'd use it for a system that has many different layers that all need to work together coherently
  • ways to approach development/testing/failover/etc. in a non-trivial Clojure app that's been built using components

Thanks in advance

by monch1962 at May 24, 2015 01:21 AM

How to construct an enumeration with a string using reflection and type information

This question is about Scala reflection. Consider this code:

object MyEnum extends Enumeration {
    type MyEnum = Value
    val En1, En2 = Value
}

case class Data(
    en: MyEnum,
    num: Int
)

def initData[T](d: Array[String])(implicit t: ClassTag[T]): T = {
    // using reflection to get type of the parameters
    val clazz = currentMirror.classSymbol(t.runtimeClass)
    val module = clazz.companion.asModule
    val im = currentMirror.reflect(currentMirror.reflectModule(module).instance)
    val ts = im.symbol.typeSignature
    val methodName = "apply"
    val constructor = ts.member(TermName(methodName)).asMethod
    for (ps <- constructor.paramLists; p <- ps) {
        p.info match {
            // here I can match typeOf[Int] and use String.toInt to convert the value for "num".
            // question, how to match p.info to Enumeration and get withName() method to construct the correct enumeration MyEnum?
        }
    }
    // reflect to construct the case class of T
    (clazz.im.reflectionMethod(clazz.constructor))(params: _*).asInstanceOf[T]
}

This seems a relatively simple task in Java, but in Scala it is hard to link the Type and reconstruct a Scala object. Can you explain what is going wrong?

by Jie Huang at May 24, 2015 01:21 AM

zmq_ctx_destroy() hangs in MFC dll

I'm writing an extension to MFC app with use of ZMQ (zmq.hpp). When I'm trying to unload my DLL from the app, the zmq_ctx_destroy() function hangs forever.

I have found a similar issue but there is no answer.

I've tried to debug it and found out that it stops in function zmq::thread_t::stop() on the first line:

DWORD rc = WaitForSingleObject (descriptor, INFINITE);

It hung even without sending anything. Simplified code looks like this:

zmq::context_t context(1);
zmq::socket_t socket(context, ZMQ_REQ);
socket.connect(ENDPOINT.c_str());

Socket and context destroyed when leaving scope.

Call Stack:

libzmq-v100-mt-gd-4_0_4.dll! zmq::thread_t::stop()  Line 56 + 0x17 bytes    C++ 
libzmq-v100-mt-gd-4_0_4.dll! zmq::select_t::~select_t()  Line 57 + 0x13 bytes   C++ 
libzmq-v100-mt-gd-4_0_4.dll! zmq::select_t::`scalar deleting destructor'()  + 0x2c bytes    C++
libzmq-v100-mt-gd-4_0_4.dll! zmq::io_thread_t::~io_thread_t()  Line 39 + 0x37 bytes C++
libzmq-v100-mt-gd-4_0_4.dll! zmq::io_thread_t::`scalar deleting destructor'()  + 0x2c bytes C++
libzmq-v100-mt-gd-4_0_4.dll! zmq::ctx_t::~ctx_t()  Line 82 + 0x49 bytes C++
libzmq-v100-mt-gd-4_0_4.dll! zmq::ctx_t::`scalar deleting destructor'()  + 0x2c bytes   C++
libzmq-v100-mt-gd-4_0_4.dll! zmq::ctx_t::terminate()  Line 153 + 0x3d bytes C++
libzmq-v100-mt-gd-4_0_4.dll! zmq_ctx_term(void * ctx_)  Line 171 + 0xa bytes    C++
libzmq-v100-mt-gd-4_0_4.dll! zmq_ctx_destroy(void * ctx_)  Line 242 C++
DataReader.dll! zmq::context_t::close()  Line 309 + 0xe bytes   C++
DataReader.dll! zmq::context_t::~context_t()  Line 303  C++

The MFC app has a mechanism to run specifically created DLLs. This DLL is based on CWinApp, all DLL-specific initialization code in the InitInstance member function and termination code in ExitInstance. So this JIRA issue should not be the case.

The app also relies on sockets.

by tr group at May 24, 2015 01:12 AM

Lobsters

CompsciOverflow

Dijkstra function for navigation for disadvantaged

Is there a way to write a function for Dijkstra's algorithm that determines which node to enqueue and which to discard? This is for a navigation solution for people with disabilities, where the path via the stairs may be shorter but is not preferable for wheelchair-bound individuals. The navigation solution should also support all other individuals, and only give a path that discards stairs if the specified user group is wheelchair-bound, etc. So basically I am attempting to write a function that checks: if the user group is wheelchair-bound, then remove the shortest path via the stairs and give the path to the nearest elevator instead. I am pretty new to this, so any assistance would be very much appreciated. My apologies for being unclear previously.
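
A hedged sketch (the data model is invented for illustration): one common way to express this is to run Dijkstra over a filtered edge set, so stair edges simply do not exist for wheelchair users and the algorithm naturally routes via elevators:

import scala.collection.mutable

case class Edge(to: String, weight: Int, isStairs: Boolean)

def shortestPath(graph: Map[String, List[Edge]], start: String, goal: String,
                 wheelchairUser: Boolean): Option[Int] = {
  def allowed(e: Edge) = !(wheelchairUser && e.isStairs) // discard stair edges for this group
  val dist = mutable.Map(start -> 0)
  val pq = mutable.PriorityQueue((0, start))(Ordering.by[(Int, String), Int](-_._1))
  while (pq.nonEmpty) {
    val (d, u) = pq.dequeue()
    if (u == goal) return Some(d)
    if (d == dist(u)) // skip stale queue entries
      for (e <- graph.getOrElse(u, Nil) if allowed(e)) {
        val nd = d + e.weight
        if (nd < dist.getOrElse(e.to, Int.MaxValue)) {
          dist(e.to) = nd
          pq.enqueue((nd, e.to))
        }
      }
  }
  None // no admissible route for this user group
}
// e.g. shortestPath(g, "entrance", "office", wheelchairUser = true)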

by Jason6916 at May 24, 2015 12:20 AM

StackOverflow

post and pre increment not working as expected for REPL variable in scala

I know that there is no ++ or -- in Scala; instead I have to use += and -=.

But when I try

scala> var a=2
a: Int = 2

scala> a +=1

scala> a
res11: Int = 3

the above works fine but not the below one

scala> 5
res13: Int = 5

scala> res13 +=1
<console>:9: error: value += is not a member of Int
              res13 +=1
                    ^

The type of both a and res13 is Int, but the second case does not work like the first.

Can anybody help?

by codegasmer at May 24, 2015 12:18 AM

DataTau

HN Daily

May 23, 2015

CompsciOverflow

Disadvantages to using simple step functions for activation in neural networks?

From what I have read, the main advantage to using tanh(x) or sigmoid(x) as an activation function for neural networks is that it is very easily differentiable.

I am trying to implement a neural network that uses a genetic algorithm for optimisation rather than backpropagation, so it doesn't matter whether my activation function is differentiable or not.

When running my algorithm in Matlab/Octave, there seems to be a massive bottleneck in the computation when calculating the sigmoid of the input to a neuron, so I was wondering: would there be any disadvantage to using a simple step function instead of the more complex sigmoid or tanh functions to activate the neuron?

by guskenny83 at May 23, 2015 11:58 PM

weights in a simple neural network

I have seen that in the material made by Andrew Ng about neural networks, he uses the following weights:

[image: the network weights]

so when I replace the final values of h_theta(x) in the formula:

[image: the sigmoid formula]

I got values near 0 and 1.

The problem I have is when I use the values from the Tom Mitchell book about machine learning: he uses the values w0=-0.8, w1=w2=0.5. I performed the calculations and got the following values of h_theta(x):

0.8,0.3,0.3,-0.2

but when I replace these set of values into the formula of the sigmoid function I got:

0.31,0.42,0.42,0.54

which does not differentiate much between values near 0 and values near 1, something that did not occur with the weights Andrew Ng proposed. Am I missing something? In this case, would it be better to select big values for the weights instead of small ones?

Any help for clarifying this is welcome

by Layla at May 23, 2015 11:48 PM

StackOverflow

Strange IllegalArgumentException in clojure

I am trying to implement the A* search algorithm in clojure (not quite finished yet):

(ns typedclj.rhizome
  (:require [clojure.set :refer [union]]))


(use 'clojure.pprint)
(defmacro epprint [expr]
  `(do (pprint '~expr)
       (pprint ~expr)))
(defmacro epprints [& exprs]
  (list* 'do (map (fn [x] (list 'epprint x))
                  exprs)))

(epprints (inc 1) (inc 2))

(def world [[1 1 1 1 1]
            [999 999 999 999 1]
            [1 1 1 1 1]
            [1 999 999 999 999]
            [1 1 1 1 1]])

(def ^:dynamic *world-size* 5)
(def ^:dynamic *step-cost* 900)
;(alter-var-root #'*world-size* (constantly 5))

(defn neighbors
  ([yx]
   (neighbors [[-1 0] [1 0] [0 -1] [0 1]] *world-size* yx))
  ([deltas *world-size* yx]
   (filter (fn [new-yx] (every? #(< -1 % *world-size*)
                                new-yx))
           (map #(vec (map + yx %)) deltas))))

(defn heuristic-cost-estimate [[y x]]
  (* *step-cost*
     (- (+ *world-size* *world-size*) y x 2)))

(defn mapmat [f mat]
  (mapv (fn [row]
          (mapv f row))
        mat))
(defn mapmats [f & mats]
  (apply mapv (fn [& rows]
                (apply mapv f rows))
         mats))
(defn randmat []
  (repeatedly *world-size*
              (fn [] (repeatedly
                       *world-size*
                       #(rand-int 10)))))

(defn min-by [f coll]
  (when (seq coll)
    (reduce (fn [min this]
              (if (> (f min) (f this)) this min))
            coll)))

(pprint (randmat))
(let [m1 (randmat)
      m2 (randmat)
      m3 (randmat)]
  (epprints m1 m2 m3)
  (mapmats + m1 m2 m3))

(defn constant-matrix [c]
  (vec (repeat *world-size*
               (vec (repeat *world-size* c)))))

(let [coords (for [i (range *world-size*)]
               (for [j (range *world-size*)]
                 [i j]))
      h-score (mapmat heuristic-cost-estimate coords)
      start [0 0]
      goal-y *world-size*
      goal-x goal-y
      goal [goal-x goal-y]]
  (loop [closedset #{}
         openset #{[0 0]}
         came-from (constant-matrix [nil nil])
         g-score (assoc-in (constant-matrix 1e8) start 0)
         f-score (mapmats + g-score h-score)
         ]

    (let [
          current (min-by f-score openset)
          openset (disj openset current)
          closedset (conj closedset current)
          nbrs (filter (complement closedset)
                       (neighbors current))
          openset (union openset (set nbrs))
          ]
      (if (empty? openset)
        (if (= current goal)
          came-from
          false)
        (let [[came-from g-score f-score]
              (reduce (fn [[cf gs fs :as to-be-updated] nbr]
                        (let [current-g-score (get-in g-score current)
                              nbr-g-score (get-in g-score nbr)
                              nbr-cost (get-in world nbr)
                              tentative-g-score (+ current-g-score
                                                   nbr-cost)]
                          (if (>= tentative-g-score nbr-g-score)
                            to-be-updated
                            [(assoc-in cf nbr current)
                             (assoc-in gs nbr tentative-g-score  )
                             (assoc-in fs nbr (+ tentative-g-score
                                                 (get-in h-score nbr)))])))
                      [came-from g-score f-score]
                      nbrs)]
          (epprints nbrs
                    openset
                    came-from
                    g-score
                    f-score)
          (recur closedset openset came-from g-score f-score))))))

I got some strange error when loading this into the repl:

CompilerException java.lang.IllegalArgumentException: Key must be integer, compiling:(/Users/kaiyin/personal_config_bin_files/workspace/typedclj/src/typedclj/rhizome.clj:70:48) 

What did I do wrong?

Here is the pseudocode that I used (from wikipedia):

function A*(start,goal)
    closedset := the empty set    // The set of nodes already evaluated.
    openset := {start}    // The set of tentative nodes to be evaluated, initially containing the start node
    came_from := the empty map    // The map of navigated nodes.


g_score[start] := 0    // Cost from start along best known path.
// Estimated total cost from start to goal through y.
f_score[start] := g_score[start] + heuristic_cost_estimate(start, goal)

while openset is not empty
    current := the node in openset having the lowest f_score[] value
    if current = goal
        return reconstruct_path(came_from, goal)

    remove current from openset
    add current to closedset
    for each neighbor in neighbor_nodes(current)
        if neighbor in closedset
            continue
        tentative_g_score := g_score[current] + dist_between(current,neighbor)

        if neighbor not in openset or tentative_g_score < g_score[neighbor] 
            came_from[neighbor] := current
            g_score[neighbor] := tentative_g_score
            f_score[neighbor] := g_score[neighbor] + heuristic_cost_estimate(neighbor, goal)
            if neighbor not in openset
                add neighbor to openset

return failure

by qed at May 23, 2015 11:47 PM

/r/scala

[Hiring] Scala developer needed- Part-time (remote)

We are looking for an experienced Scala developer to work on a project part-time. Here is some of what we want to implement:

  1. Work on a web service with Spray.io, akka-http or Play Framework
  2. Use Cassandra DB on the backend with Scala
  3. OAuth integration with FB, LinkedIn and Twitter for specific use cases

Please email me at yemi[at]hamoye [dot] com if you're interested or can recommend someone else.

submitted by Hamoye
[link] [1 comment]

May 23, 2015 11:23 PM

StackOverflow

What's a good Scala idiomatic approach to rule-based validation

Is there an idiomatic solution to applying a series of business rules, for example, to an incoming JSON request? The "traditional" Java approach is very if-then intense, and Scala must offer a far better solution.

I've experimented a bit with pattern matching but haven't really come up with a pattern that works well. (Invariably, I end up with absurdly nested match statements)...

Here's an absurdly simple example of what I'm trying to do:

if (dateTime.isDefined) {
    if (d == None)
        // valid, continue
    if (d.getMillis > new DateTime().getMillis)
        // invalid, fail w/ date format message
    else
    if (d.getMillis < new DateTime(1970).getMillis)
        // invalid, fail w/ date format message
    else
        // valid, continue
} else
    // valid, continue

if (nextItem.isDefined) {
    // ...
}

I'm thinking perhaps an approach that uses a series of chained Try()... but it seems like this pattern must exist out there already.
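
A hedged sketch along those lines (my helper names, joda-time assumed): each rule returns an Either whose Left carries the failure message, and a for-comprehension chains the rules so the first failure short-circuits:

import org.joda.time.DateTime

def validDate(d: Option[DateTime]): Either[String, Option[DateTime]] = d match {
  case Some(dt) if dt.isAfterNow     => Left("date is in the future")
  case Some(dt) if dt.getMillis < 0L => Left("date is before 1970")
  case ok                            => Right(ok) // None is valid, as in the question
}

def validNext(n: Option[String]): Either[String, Option[String]] =
  Right(n) // further rules chain the same way

// Either is right-biased in Scala 2.12+; on older versions add .right after each step
val result = for {
  d <- validDate(Some(DateTime.now))
  n <- validNext(Some("next"))
} yield (d, n)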

by Zac at May 23, 2015 10:40 PM

CompsciOverflow

Show that regular languages are closed under Mix operations

Let $L_1, L_2$ be two regular languages, and consider the operations:

$$Mix_1(L_1, L_2) =\{ a_1b_1a_2b_2\ldots a_nb_n | n\ge 0 \land a_1,a_2,\ldots ,a_n,b_1,b_2,\ldots ,b_n\in\Sigma\\ \land a_1a_2\ldots a_n\in L_1 , b_1b_2\ldots b_n\in L_2\}$$

$$Mix_2(L_1, L_2) = \{ x_1y_1x_2y_2\ldots x_ny_n | n\ge 0 \land x_1,x_2,\ldots,x_n,y_1,y_2,\ldots,y_n\in\Sigma^* , x_1x_2\ldots x_n\in L_1 \land y_1y_2\ldots y_n\in L_2\}$$

Prove that regular languages are closed under $Mix_1$ and $Mix_2$ operations.

So for the first operation:
Let $D_1, D_2$ be two DFAs accepting $L_1, L_2$, respectively. We define $D = (Q,\Sigma, \delta, q_0, F)$.

where $Q = Q_1 \cup Q_2$. The transition function $\delta$ will behave as $\delta_1$ and $\delta_2$. But in addition, letting $n = \left|\Sigma\right|$, we make $n$ transitions from every $q_i\in Q_1$ to $q_j\in Q_2$ (and vice versa), one for every $\sigma \in \Sigma$.

So it's pretty easy to see that $D$ accepts $Mix_1(L_1, L_2)$.

Question:
How can I show that regular languages are closed under $Mix_2$?

by Elimination at May 23, 2015 10:39 PM

StackOverflow

Using SBT and Maven in single project?

  • I am migrating my Java application, which is a single Maven module (contains a pom.xml), to Akka.
  • I am new to Akka (and the Typesafe ecosystem), but this is what I plan on doing:
ApplicationActor
       | 
 ExistingProjectActor

where

  • ApplicationActor is based on sbt (and is Supervisor)
  • ExistingProjectActor is current project with pom.xml (and is Child Actor)

Questions

  • Is it possible to use sbt as the main build tool but, for legacy purposes, also include ExistingProjectActor (with its pom.xml)?

by daydreamer at May 23, 2015 10:33 PM

How to use figwheel with a ring-handler that's a component?

I'd like to use figwheel to reload the frontend of an all-clojure project I'm playing with.

The backend serves a REST api and is organized as a bunch of components from which I create a system in my main function (I use duct to create the handler component). I want to pass state to my handlers using closures, but the only means of configuring figwheel to use my handler seems to be setting the ring-handler key in project.clj, and this requires that I pass a handler that is defined in a namespace at lein startup time.

So - is there a way to configure figwheel when I am doing my component startup? I'm still very new at Closure so it's likely I'm missing something in plain sight.

Passing state as parameter to a ring handler? is a similar question, but the answer there involves binding the handler a var at the top-level of a namespace, which I'm trying to avoid.

by Tom Dunham at May 23, 2015 10:25 PM

TheoryOverflow

is "spaghetti sort" really O(n) (even as a thought experiment) ?

I'm referring to the notion described here: http://en.wikipedia.org/wiki/Spaghetti_sort

In the analysis section the author admits that considering it to be O(n) requires the assumption that the act of identifying and then removing the longest spaghetti rod (the one that has stopped your hand from descending further) is an O(1) rather than an O(n) or O(log n) operation. This seems to me a rather unreasonable assumption, especially if many rods of relatively similar height exist. Alternatively, it seems equivalent to assuming that one can pick the longest rod from a heap of rods arbitrarily spread on a table in O(1), without going through the "leveling" procedure at all.

by mmmspaghetti at May 23, 2015 10:10 PM

StackOverflow

How to reduce() on a collection keeping the collection itself in Scala?

I just need to reduce the elements in a collection but I'd like to keep the collection in the result.

scala> List("a","b","c").reduce(_+_)
res0: String = abc

I'd like to get

scala> List("a","b","c").someSortOfReduce(_+_)
res0: List[String] = List(abc)

scala> Seq("a","b","c").someSortOfReduce(_+_)
res1: Seq[String] = Seq(abc)
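
A hedged sketch for the List case (my helper name; reduceOption also avoids the exception an empty collection would throw):

def someSortOfReduce[A](xs: List[A])(f: (A, A) => A): List[A] =
  xs.reduceOption(f).toList // Some(reduced) becomes List(reduced), None becomes Nil

someSortOfReduce(List("a", "b", "c"))(_ + _) // List("abc")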

by Max at May 23, 2015 10:07 PM

Lobsters

StackOverflow

how does macro-generating macro work in clojure

Here is a macro-generating macro I learned from #clojure channel:

(defmacro import-alias
  [new-name imported]
  `(defmacro ~new-name [f# & body#]
     `(. ~'~imported ~f# ~@body#)))
(pprint 
  (macroexpand-1 
    '(import-alias J Math)))

This expands to:

(clojure.core/defmacro
 J
 [f__36239__auto__ & body__36240__auto__]
 (clojure.core/seq
  (clojure.core/concat
   (clojure.core/list '.)
   (clojure.core/list 'Math)
   (clojure.core/list f__36239__auto__)
   body__36240__auto__)))

How come we get single-element lists concatenated together? I know this is very tricky wizardry and I wouldn't do it in practice, but I am intrigued nonetheless, how does it actually work?

If I just change the position of one quote, then the expansion is quite different:

(defmacro import-alias
  [new-name imported]
  `(defmacro ~new-name [f# & body#]
     `(. ~~'imported ~f# ~@body#)))
(pprint
  (macroexpand-1
    '(import-alias J Math)))

;;; expand:
(clojure.core/defmacro
 J
 [f__36269__auto__ & body__36270__auto__]
 (clojure.core/seq
  (clojure.core/concat
   (clojure.core/list '.)
   (clojure.core/list imported)
   (clojure.core/list f__36269__auto__)
   body__36270__auto__)))

and:

(defmacro import-alias
  [new-name imported]
  `(defmacro ~new-name [f# & body#]
     `(. '~~imported ~f# ~@body#)))
(pprint
  (macroexpand-1
    '(import-alias J Math)))

;; expand:
(clojure.core/defmacro
 J
 [f__36323__auto__ & body__36324__auto__]
 (clojure.core/seq
  (clojure.core/concat
   (clojure.core/list '.)
   (clojure.core/list
    (clojure.core/seq
     (clojure.core/concat
      (clojure.core/list 'quote)
      (clojure.core/list Math))))
   (clojure.core/list f__36323__auto__)
   body__36324__auto__)))

Why?

by qed at May 23, 2015 09:45 PM

/r/compsci

Naming of Graphs

I've been doing data structures in Java. I'd like to know why graphs are named graphs and not maps, since they kind of look like maps and they're used for maps (like network flows) and stuff. They could name the map a "unictionary", since it can only have unique keys.

submitted by moEazzy
[link] [3 comments]

May 23, 2015 09:42 PM

/r/emacs

How to better indent code in sx?

I'm using sx for basically everything I used to do in Stackexchange with their web interface.

Problem: I find it difficult to paste/write code snippets and have them indented properly.

Is there any trick I can use to overcome this problem?

submitted by shackra
[link] [4 comments]

May 23, 2015 09:33 PM

CompsciOverflow

Intuition behind F-algebra

I looked at here for getting an intuition about F-algebra, but I am still left with some questions.

Suppose I have a group signature $\Sigma= (* : X \times X \rightarrow X, \thicksim: X \rightarrow X , e : \rightarrow X)$, with the following axioms in the universal-algebraic style:

  1. $x ∗ (y ∗ z) = (x ∗ y) ∗ z$ (Associativity)
  2. $e ∗ x = x = x ∗ e$ (Identity element)
  3. $x ∗ (\thicksim x) = e = (\thicksim x) ∗ x$ (Inverse element)

A model of the above signature is an assignment of two functions to its function symbols, and a constant to its constant symbol, such that the above three laws hold.

My Question:

How can the above structure, with its three axioms, be encoded (represented) in the F-algebraic notion?

1) What is my endofunctor F, and why?

2) How are these three laws represented in F-algebra?

P.S.: I would appreciate it if anybody could refer me to a textbook or a document where I can read more examples to further understand the F-algebra concept.

by qartal at May 23, 2015 09:22 PM

/r/clojure

Using Clojure with JavaFX and making an uberjar

I had a little program written in Clojure which used swing components for its UI, then I learned about JavaFX and thought its cool. So I changed my program to use JavaFX instead.

It all went well until I tried to make an uberjar, which caused the compiler to hang or to spit errors about the JavaFX toolkit not being initialized. I searched for samples online, but none of them could compile to an uberjar either, not even the simplest hello-world app.

Anyone can help with this? I'm using JavaFX 2.2

submitted by farzadbekran
[link] [7 comments]

May 23, 2015 09:18 PM

StackOverflow

IllegalStateException in nested quote and unquote

Here is an example from joy of clojure:

(let [x 9, y '(- x)]
  (println `y)
  (println ``y)
  (println ``~y)
  (println ``~~y))

Output from repl:

typedclj.macros/y
(quote typedclj.macros/y)
typedclj.macros/y
(- x)

If I rearrange the order of quote/unquote a bit, results are still the same (I am wondering why):

(let [x 9, y '(- x)]
  (println `y)
  (println ``y)
  (println `~`y)
  (println `~`~y))

But if I put the tilde in front:

(let [x 9, y '(- x)]
  (println `y)
  (println ``y)
  (println `~`y)
  (println ~``~y))

I get a strange error:

CompilerException java.lang.IllegalStateException: Attempting to call unbound fn: #'clojure.core/unquote, compiling:(/Users/kaiyin/personal_config_bin_files/workspace/typedclj/src/typedclj/macros.clj:1:25) 

Why do I get this error?

by qed at May 23, 2015 09:17 PM

CompsciOverflow

Is a subgraph either a spanning subgraph or a full subgraph?

A graph $G' = (N' ,A')$ is a spanning subgraph of a graph $G = (N, A)$ iff $N ' = N$ and $A' \subseteq A$.

A graph $G' = (N',A')$ is a full subgraph of a graph $G = (N, A)$ iff $N' \subseteq N$ and $A' = \{(x,y) \in A \mid x,y \in N '\}$.

  • I have to prove that "If $G'$ is a subgraph of a graph $G$, then either $G'$ is a spanning subgraph of $G$, or $G'$ is a full subgraph of $G$. If this statement is valid, provide a proof that it is valid; otherwise provide a counterexample." But isn't a full subgraph actually a spanning subgraph?

I mean that any subgraph of $G$ is a full subgraph in the end, right? Any ideas how I can prove the statement?

  • My second question is how to calculate the number of spanning subgraphs of $G$ as a function of the number of nodes in $N$. How can I do this without knowing $A$ or how many edges I have in the graph?

by KeykoYume at May 23, 2015 09:16 PM

StackOverflow

for vs map in functional programming

I am learning functional programming using Scala. I notice that for loops are not used much in functional programs; map is used instead.

Questions

  1. What are the advantages of using map over a for loop, in terms of performance, readability, etc.?

  2. What is the intention of providing a map function when the same thing can be achieved with a loop?

Program 1: Using For loop

val num = 1 to 1000
val another = 1000 to 2000
for ( i <- num )
{
  for ( j <- another) 
  {
    println(i,j)
  }
}

Program 2 : Using map

val num = 1 to 1000
val another = 1000 to 2000
val mapper = num.map(x => another.map(y => (x,y))).flatten
mapper.map(x=>println(x))

Both programs do the same thing.
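
For what it's worth, a for-comprehension with yield desugars into exactly these map/flatMap calls, so the two styles can be compared directly (a small self-contained sketch):

val num = 1 to 1000
val another = 1000 to 2000

// The for-comprehension below...
val pairsFor = for { i <- num; j <- another } yield (i, j)

// ...desugars to this chain of flatMap/map:
val pairsMap = num.flatMap(i => another.map(j => (i, j)))

assert(pairsFor == pairsMap) // identical results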

by Knight71 at May 23, 2015 09:15 PM

How to make Scala Action call generic (passing classes as types)

I have a lot of annoyingly almost-identical function calls to handle incoming Actions in my Play application:

val checkForConnection = GPAuthenticatedAction(parse.json) { implicit request =>
    request.body.validate[CheckForConnectionRequest] match {
        case JsSuccess(request, _) => {
            val (status, response) = performCheckForConnection(request)
            status(Json.toJson(response))
        }

        case e: JsError => NotAcceptable(Json.toJson(GPError.MalformedJSON.value(e.toString + " REQUEST: " + request.body.toString)))
    }
}

Everything between the request.body.validate and the end of the function is exactly the same, except for the type we validate (CheckForConnectionRequest) and the function being called to execute the request (performCheckForConnection).

Naturally I want to turn these repetitive chunks of code into a one-liner, such as:

val checkForConnection = action(CheckForConnectionRequest, performCheckForConnection)

But I'm having trouble with the Scala syntax to make that work. This is what I've got so far. It does not come anywhere close to compiling:

def action[A: Format, B: Format](request: A, f:(A) => (Status, Format[B])) = GPAuthenticatedAction(parse.json){ implicit request =>
    request.body.validate[A] match {
        case JsSuccess(request, _) => {
            val (status, response) = f(request)
            status(Json.toJson(response))
        }

        case e: JsError => NotAcceptable(Json.toJson(GPError.MalformedJSON.value(e.toString + " REQUEST: " + request.body.toString)))
    }
}
val checkForConnection = action(CheckForConnectionRequest, performCheckForConnection)

The compiler errors are:

[error] /Users/zbeckman/Projects/Glimpulse/Server/project/glimpulse-server/app/controllers/GPFriendService.scala:76: Cannot write an instance of play.api.libs.iteratee.Enumeratee[B,play.api.libs.json.JsValue] to HTTP response. Try to define a Writeable[play.api.libs.iteratee.Enumeratee[B,play.api.libs.json.JsValue]]
[error]                 status(Json.toJson(response))
[error]                       ^
[error] /Users/zbeckman/Projects/Glimpulse/Server/project/glimpulse-server/app/controllers/GPFriendService.scala:82: type mismatch;
[error]  found   : (controllers.GPFriendService.Status, controllers.GPFriendService.CheckForConnectionResponse)
[error]  required: (controllers.GPFriendService.Status, play.api.libs.json.Format[?])
[error]     val checkForConn = action(CheckForConnectionRequest, performCheckForConnection)
[error]                                                          ^
[error] /Users/zbeckman/Projects/Glimpulse/Server/project/glimpulse-server/app/controllers/GPFriendService.scala:82: No Json formatter found for type controllers.GPFriendService.CheckForConnectionRequest.type. Try to implement an implicit Format for this type.
[error]     val checkForConn = action(CheckForConnectionRequest, performCheckForConnection)
[error]                              ^
[error] three errors found
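
One plausible direction (a sketch only, untested against this codebase; it reuses GPAuthenticatedAction and GPError from the question): give the request type a Reads bound and the response type a Writes bound, and let both be inferred from the handler function instead of passing the companion object:

def action[A: Reads, B: Writes](f: A => (Status, B)) =
  GPAuthenticatedAction(parse.json) { implicit request =>
    request.body.validate[A] match {
      case JsSuccess(a, _) =>
        val (status, response) = f(a)
        status(Json.toJson(response))
      case e: JsError =>
        NotAcceptable(Json.toJson(GPError.MalformedJSON.value(e.toString + " REQUEST: " + request.body.toString)))
    }
  }

// The request class is no longer passed as a value; it is inferred:
val checkForConnection = action(performCheckForConnection)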

by Zac at May 23, 2015 09:12 PM

Multiply collection and randomly merge with other - Apache Spark

I am given two collections (RDDs), let's say v and a, and a number of samples:

val v = sc.parallelize(List("a", "b", "c"))
val a = sc.parallelize(List(1, 2, 3, 4, 5))

val samplesCount = 2

I want to create two collections (samples) consisting of pairs where one value is from v and the second one from a. Each collection must contain all values from v, paired with random values from a.

Example result would be:

(
 (("a", 3), ("b", 5), ("c", 1)), 
 (("a", 4), ("b", 2), ("c", 5))
)

One more constraint: the values from v or a can't repeat within a sample.

I can't think of any good way to achieve this.
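
A sketch of one possible approach (plain Scala collections rather than RDDs, so it only illustrates the pairing logic): shuffle a and take as many values as v has, once per sample, which guarantees no repeats within a sample.

import scala.util.Random

val v = List("a", "b", "c")
val a = List(1, 2, 3, 4, 5)
val samplesCount = 2

// Each sample pairs every element of v with a distinct random element of a.
val samples = (1 to samplesCount).map { _ =>
  v.zip(Random.shuffle(a).take(v.length))
}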

by Dawid Wysakowicz at May 23, 2015 09:12 PM

CompsciOverflow

Local Cache miss using MESI Coherence Protocol

My notes on MESI state that there are a few courses of action when we experience a local cache miss on a read, depending on the global state of the data (whether other copies already exist, and the state they exist in).

I'm having some trouble understanding how we know the state of the data when we miss on a read.

As far as I'm aware, MESI state is attached to each cache line, so if we do not have a copy of the required data (we missed on the read), how do we know the relevant MESI state, so we can decide how to react to the miss (whether we should read from memory, check for shared/modified copies, etc.)?

by Sammdahamm at May 23, 2015 09:11 PM

/r/clojure

StackOverflow

Serialize map[string, Any] with net-lift-json library to custom json

I'm trying to serialize a Map[String, demo] to custom JSON using the lift-json library.

Here demo is a case class:

case class demo(id: String, `type`: String) // backticks needed: `type` is a reserved word

I created JSON from the values of the map with the following code:

pretty(render(sjson))

which looks like:

{"p":{"id":"1","type":"simple"},"g":{"id":"2","type":"simple"}}

But while creating the above JSON, is there any way to emit field names different from the case class's?

I mean like this:

{{"myID":"1","myType":"simple"},{"myID":"2","myType":"simple"}}

Could someone suggest any example or documentation available to handle this task?
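
One way that may work is to skip the default case-class serialization and build the AST by hand with lift-json's DSL (a sketch; toCustomJson is an invented helper):

import net.liftweb.json._
import net.liftweb.json.JsonDSL._

// Build the JSON by hand so the output field names can differ
// from the case-class field names.
def toCustomJson(d: demo): JObject =
  ("myID" -> d.id) ~ ("myType" -> d.`type`)

// pretty(render(toCustomJson(demo("1", "simple"))))
// => {"myID":"1","myType":"simple"}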

by DSKVP at May 23, 2015 09:02 PM

Fefe

An employee of the Bank of England accidentally emailed the secret ...

An employee of the Bank of England accidentally emailed the secret Brexit plans to the Guardian. Including PR instructions on how best to deny the plan's existence. Working with professionals for once!

Update: Freudian typo corrected.

May 23, 2015 09:01 PM

StackOverflow

Can I force scala to error on an inexhaustive match?

An inexhaustive match such as

def foo[A](t: Seq[A]) = t match {
    case Seq(x) => x
}

is often (not always, but usually) a mistake on my part that will crash at runtime. Scala warns, but in an incremental build, the file might already be compiled so I will miss the warning.

Is there a way, either globally or locally, perhaps by an annotation, to force scala to turn the warning into an error?
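
For what it's worth, scalac can escalate warnings to errors globally with the -Xfatal-warnings flag; in an sbt build that is a one-line setting (a sketch):

// In build.sbt: turn every warning, including inexhaustive matches,
// into a compile error.
scalacOptions += "-Xfatal-warnings"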

by Owen at May 23, 2015 09:00 PM

CompsciOverflow

union of two equivalence classes (Myhill–Nerode theorem)

Let $L$ be a language such that the equivalence relation defined in the Myhill–Nerode theorem has $4$ equivalence classes: $A_1, \ldots, A_4$.

Let $S = A_1 \cup A_2$.

  1. Is $S$ always regular?
  2. How many equivalence classes does the relation $\sim_S$ create?
  1. I think we may construct two DFAs, $D_1, D_2$, which accept $A_1, A_2$, respectively. Let $D$ be the DFA accepting $L$. We construct $D_1$ by copying the states of $D$, choosing an arbitrary $w \in A_1$, running it on $D$, and marking as accepting only the state that accepted $w$. The same applies for $D_2$. Finally, we unite $D_1$ and $D_2$ by connecting a new starting state with two $\varepsilon$ arrows, one to each DFA. In this way we have constructed an NFA, and therefore $S$ is a regular language. Is this construction legal?
  2. My thought is $3$, because every word lies in one of $S$, $A_3$, $A_4$. Am I right?

by Elimination at May 23, 2015 08:59 PM

Lobsters

CompsciOverflow

Ford-Fulkerson Running Time

This question might be really basic, but every source seems to skip over a couple of steps, neither of which seems trivial to me. It would be great if someone could explain them!

In the analysis of Ford–Fulkerson I understand why the while loop runs no more than $val(f^*)$ times, but I don't see why it takes only $O(E)$ time to find an augmenting path. Using BFS or DFS would give $O(V+E)$, no? The running time I see everywhere is $O(E\cdot val(f^*))$.

Also, given $C$, the maximum capacity of any edge in the network, I see a lot of people stating the bound $val(f^*)\leq nC$, where $n$ seems to be $|V|$. However, it isn't clear why this relation holds. $val(f^*)\leq C|E|$ makes more sense to me but never seems to be used...

by Bridgo at May 23, 2015 08:58 PM

Unambiguous but nondeterministic context-free language?

Whenever deterministic context-free languages are discussed, the webpage/textbook would always give a side note saying that although deterministic context-free languages are never ambiguous, unambiguous context-free languages may still be nondeterministic.

However, they never give an example. Is there a short, simple example of a context-free language that is

  • unambiguous
  • but nondeterministic

by user54609 at May 23, 2015 08:35 PM

StackOverflow

Which OpenJDK release contains the Oracle CPU?

I'd like to use an OpenJDK 8 build which contains the latest Oracle CPU (Critical Patch Update), issued in April: Oracle Critical Patch Update Advisory - April 2015

For instance, the latest JDK8 release is 8u40. Does it contain the latest CPU?

by Alex at May 23, 2015 08:19 PM

Wes Felter

"Consulting service: you bring your big data problems to me, I say “your data set fits in..."

“Consulting service: you bring your big data problems to me, I say “your data set fits in RAM”, you pay me $10,000 for saving you $500,000.”

- Gary Bernhardt

May 23, 2015 08:13 PM

/r/compsci

What requires more energy/power: the human brain, or a supercomputer capable of 20 petaflops?

For example, let's say someone is running a program to recognize a person's mood from a photograph.

Let's say the photo is of an awkward interaction, and this photo is not indexed; the only thing the computer is given is the pixel dimensions of the person in the photograph. The computer system has millions of other photos that it has indexed according to known body and facial communication. The supercomputer would then run this person's image against all of these indexes and choose the most probable "feeling".

Vs. a guy/girl looking at a picture.

Keep in mind that by the energy a human takes I mean, for example, the calories expended in looking at a photograph and whatever empathy processing the brain does. These calories should include all the energy from sunlight needed to create those calories, and the energy used in farming.

submitted by TheBeardofGilgamesh
[link] [3 comments]

May 23, 2015 08:10 PM

StackOverflow

Scala how can I count the number of occurrences in a list

val list = List(1,2,4,2,4,7,3,2,4)

I want to implement it like this: list.count(2) (returns 3).
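
For reference, the standard library's count takes a predicate rather than an element, so the idiomatic spelling is:

val list = List(1, 2, 4, 2, 4, 7, 3, 2, 4)
list.count(_ == 2) // => 3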

by Fugees at May 23, 2015 08:08 PM

CompsciOverflow

Number of ways to connect sets of $k$ vertices in a perfect $n$-gon

This is a copy of my post at math.stackexchange.com, as my question is still not fully answered and I really want to find a solution. Feel free to look there for useful comments and partial solutions:

http://math.stackexchange.com/questions/1294224/number-of-ways-to-connect-sets-of-k-dots-in-a-perfect-n-gon

Let $Q(n,k)$ be the number of ways in which we can connect sets of $k$ vertices (dots) in a given perfect $n$-gon, such that no two lines intersect in the interior of the $n$-gon, and no vertex remains isolated.

Intersection of the lines outside the $n$-gon is acceptable. Obviously $k \mid n$, and $n$ can't be prime, because otherwise there would be vertices/dots left unconnected. The $n$-gon itself is an acceptable solution to a connection of $n$ vertices; in the case of $k>2$, these aren't single lines but sets of connected lines, a sort of network formed by connected planar graphs with straight edges on $k$ vertices, which are required to be vertices of the $n$-gon itself.

There must always be $S =\frac nk$ sets of lines. For $k=2$, there are exactly $\frac nk$ lines, and for $k>2$, there are exactly $\frac nk$, not lines but sets of such connected planar graphs.

Take for example $Q(6,2)$. We have a perfect hexagon. By brute-forcing with pencil and paper, I found that there are 5 ways to connect sets of 2 vertices (dots) such that no two lines intersect inside the hexagon and no vertex remains unconnected. Hence, $Q(6,2) = 5$.

The following image depicts the case of $Q(6,2)$:

(image omitted)

For generality I ask about any amount of $k$ dots, even though I've recently found the solution for $k=2$.

Now let's move one step further:

Let $U(n,k)$ be the number of unique ways to connect sets of $k$ dots in a perfect $n$-gon, such that no two lines intersect and rotational symmetry is neglected, i.e., every possible arrangement is unique and can't be formed by rotating another arrangement in any way. Note that $U(6,2)=2$ because the arrangements in the first row of the image can be formed by rotating one another, and the same holds for the second row.

I'm pretty clueless about both functions $U$ and $Q$, and I couldn't derive an algorithm or formula to any of them. Hence I'm posting this here.

I'm pretty sure there's a pure combinatorial approach to this problem, perhaps involving Polya's Enumeration Theorem (PET). Is there an elegant solution to these functions? Can they even be solved for $k>2$?

Any light shed on either of the functions will be very much appreciated, as I haven't been successful in deriving a formula for either of them. Both formulas and algorithms would be great!

I can program in Java and Mathematica.

Thanks a lot in advance.

EDIT - Temporary Solutions + Relevant questions AND progress

$$Q(n,2) = C_{n \over 2}\quad\text{where}\quad C_n = \frac{1}{n+1} {2n\choose n}$$ And $C_n$ denotes the $n$'th Catalan number.

Now let us denote $W(n) = U(n,2)$. Can you find a formula for $W(n)$? Perhaps a connection between $Q(n,2)$ and $W(n)$?

by Matan at May 23, 2015 08:02 PM

Fefe

The US Senate could not muster a majority for the Patriot Act "reform" ...

The US Senate could not muster a majority for the Patriot Act "reform". It would have shifted data storage from the NSA to the telcos, i.e. introduced a data-retention scheme. The House had already approved it; the Senate now has not. That means that if they don't still come to an agreement, the Patriot Act as a whole will expire at the end of the month. It probably won't come to that, but it is a nice signal nonetheless.

May 23, 2015 08:01 PM

/r/compsci

What does this question about languages mean?

I have come across this question in some materials provided for revision purposes by my lecturer:

http://i.imgur.com/dvl5Dxp.png?1

However I don't really understand what it means by "construct complement languages".

Also, what is the horizontal bar across the top of the equations?

Can somebody explain please?

submitted by 1475963987412365
[link] [4 comments]

May 23, 2015 07:56 PM

StackOverflow

Scala pickling with JSON list

I'm trying to "unpickle" JSON structures like the following with Scala-pickling:

{"id":1,"aList":[{"x":1}, {"x":2}]}

Sadly when unpickling with the following code:

import scala.pickling._, scala.pickling.Defaults._, json._

val jsonString="""{"id":1,"aList":[{"x":1}, {"x":2}]}"""

case class X(id:Int,aList:List[Y])
case class Y(x:Int)

jsonString.unpickle[X]

I get the following exception:

scala.MatchError: [{"x" : 1.0}, {"x" : 2.0}] (of class scala.util.parsing.json.JSONArray)
at scala.pickling.json.JSONPickleReader$$anonfun$beginEntry$2.apply(JSONPickleFormat.scala:212)
at scala.pickling.json.JSONPickleReader$$anonfun$beginEntry$2.apply(JSONPickleFormat.scala:203)
at scala.pickling.PickleTools$class.withHints(Tools.scala:521)
at scala.pickling.json.JSONPickleReader.withHints(JSONPickleFormat.scala:170)
at scala.pickling.json.JSONPickleReader.beginEntry(JSONPickleFormat.scala:203)

Is it possible to use Scala-pickling with lists/sets?

by arsenio at May 23, 2015 07:51 PM

/r/compsci

/r/netsec

StackOverflow

spark schema rdd to RDD

I would like to do a word count in Spark. I created an RDD using Spark SQL to extract distinct tweets from a data set. I would like to use the split function on the RDD, but it's not allowing me to do so.

Error: value split is not a member of org.apache.spark.sql.SchemaRDD

Spark code that doesn't work for the word count:

val disitnct_tweets=hiveCtx.sql("select distinct(text) from tweets_table where text <> ''")
val distinct_tweets_List=sc.parallelize(List(distinct_tweets))

// tried split on both RDDs; didn't work

distinct_tweets.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)

distinct_tweets_List.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)

But when I output the data from Spark SQL to a file, load it again, and run split, it works.

Example code that works:

val distinct_tweets=hiveCtx.sql("select distinct(text) from tweets_table where text <> ''")
val distinct_tweets_op=distinct_tweets.collect()
val rdd=sc.parallelize(distinct_tweets_op)
rdd.saveAsTextFile("/home/cloudera/bdp/op")
val textFile=sc.textFile("/home/cloudera/bdp/op/part-00000")
val counts=textFile.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
counts.saveAsTextFile("/home/cloudera/bdp/wordcount")

Instead of writing to a file and loading it again just to run split, is there a workaround to make the split function work directly?
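
A sketch that may avoid the file round-trip entirely (Spark 1.x: a SchemaRDD is an RDD of Rows, so pull the string column out of each Row before splitting):

val distinctTweets = hiveCtx.sql("select distinct(text) from tweets_table where text <> ''")

val counts = distinctTweets
  .map(row => row.getString(0))      // Row -> the text column as a String
  .flatMap(line => line.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)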

Thanks

by sri hari kali charan Tummala at May 23, 2015 07:33 PM

/r/emacs

Stock emacs tips

What are your favorite tips for working with stock emacs?

I'm moving to an environment where I am discouraged from installing third-party packages. I've seen a lot of cool packages here but not a lot of tips for working with stock Emacs, and since almost nothing in my current config works, I am looking for alternatives. Commands, small snippets of elisp, or pointers to other places are welcome! Bonus points for things that emulate (maybe incompletely): expand-region, smex, multiple-cursors, auto-complete, yasnippet.

submitted by reddit_uname
[link] [18 comments]

May 23, 2015 07:08 PM

Lobsters

QuantOverflow

Why is that a risk averse consumer buys the optimum insurance when there is actuarially fair insurance?

I think I understand the fact that when the marginal utilities of the same function are equal (a consequence of the actuarially fair insurance), the independent variables must be equal -- right? But what is the reason this requires the consumer to be risk averse? What does $u''<0$ change compared to a $u''>0$ condition?

Edit: Example found here

"As a risk-averse consumer, you would want to choose a value of $x$ so as to maximize expected utility, i.e.

Given actuarially fair insurance, where $p = r$, you would solve: $\max \left[pu(w - px - L + x) + (1-p)u(w - px)\right]$, since in case of an accident, you total wealth would be $w$, less the loss suffered due to the accident, less the premium paid, and adding the amount received from the insurance company.

Differentiating with respect to $x$, and setting the result equal to zero, we get the first-order necessary condition as: $(1-p)pu'(w - px - L + x) - p(1-p)u'(w - px) = 0$,

which gives us: $u'(w - px - L + x) = u'(w - px)$

Risk-aversion implies $u'' < 0$, so that equality of the marginal utilities of wealth implies equality of the wealth levels, i.e.

$w - px - L + x = w - px$,

so we must have $x = L$.

So, given actuarially fair insurance, you would choose to fully insure your car. Since you're risk-averse, you'd aim to equalize your wealth across all circumstances - whether or not you have an accident.

However, if $p$ and $r$ are not equal, we will have $x < L$; you would under-insure. How much you'd underinsure would depend on the how much greater $r$ was than $p$."

Now, how does the condition $u''<0$ change anything in reaching the result expressed above?
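
One way to see the role of the condition (a sketch of the key step): strict concavity makes $u'$ strictly decreasing, hence injective, so

$$u'' < 0 \;\Rightarrow\; u' \text{ strictly decreasing} \;\Rightarrow\; \big(u'(a) = u'(b) \implies a = b\big),$$

which is exactly what licenses the move from equal marginal utilities to equal wealth levels. With $u'' > 0$ the equality would still force $a = b$ (convexity also makes $u'$ injective), but the first-order condition would then mark a minimum of expected utility rather than a maximum.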

by John Doe at May 23, 2015 07:05 PM

CompsciOverflow

How much calculus do you actually use (not counting use of a graphing calculator)? [on hold]

I'm wondering if it's worth it to pursue some deep study of calculus. I already have some familiarity with derivatives, limits, and integrals. I also have a graphing calculator that's helped me through high school and college algebra.

by moonman239 at May 23, 2015 07:03 PM

StackOverflow

[Scala] Properly reading an object from a file in the presence of type erasure

Let's say I have a map stored on disk and I should like to retrieve it:

type myType = Map[something , somethingElse]

...

try{
    val bytes = Files.readAllBytes(path)
    val is = new ObjectInputStream(new ByteArrayInputStream(bytes))
    val m = is.readObject().asInstanceOf[myType]
    Some(m)
}catch{
    case _:FileNotFoundException | _:IOException | _:ClassCastException => None
}

So far so good. However, as Maps are generic and due to the ever-annoying type erasure, I doubt I can conveniently rely on the ClassCastException to make sure that if I ever change myType, outdated maps will be discarded.

The thought has crossed my mind to simply hash myType and to retrieve and compare the hash prior to retrieving the map, but that feels more like a workaround than a solution. What would be the proper way to handle this?
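
One direction that may be cleaner than hashing (a sketch, with an invented Envelope class and version constant): wrap the map in an envelope that carries an explicit schema version, and pattern-match on it when reading.

import java.io.{ByteArrayInputStream, ObjectInputStream}

case class Envelope(version: Int, payload: Map[String, String]) extends Serializable

val CurrentVersion = 2

def readPayload(bytes: Array[Byte]): Option[Map[String, String]] =
  try {
    val is = new ObjectInputStream(new ByteArrayInputStream(bytes))
    is.readObject() match {
      case Envelope(CurrentVersion, payload) => Some(payload) // version matches
      case _                                 => None          // outdated or foreign
    }
  } catch {
    case _: Exception => None
  }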

Edit: The maps were stored to disk as follows:

var myMap : myType = ...

...

try{
    val b = new ByteArrayOutputStream()
    val os = new ObjectOutputStream(b)
    os.writeObject(myMap)
    Files.write(path, b.toByteArray)
}catch{
...
}

by User1291 at May 23, 2015 07:01 PM

Fefe

Ireland has introduced same-sex marriage by referendum. ...

Ireland has introduced same-sex marriage by referendum. It goes to show that people are often not nearly as bad as things look from the outside.

May 23, 2015 07:01 PM

StackOverflow

How to convert Scala fromURL data to Maps/other data structures for data analysis

I have this program that returns the JSON response just fine. But I am wondering how I can access these JSON properties individually to do data analysis on these JSON objects.

val in = Source.fromURL("https://api.github.com/search/repositories?q=tetris")
val ouput = in.getLines //> ouput : List[String]

val output = in.getLines.toList gives the following output:

List({"total_count":5487,"incomplete_results":false,
                                                              //| "items":[{"id":3477759,"name":"tetris","full_name":"tdd-elevator-training/te
                                                              //| tris","owner":{"login":"tdd-elevator-training","id":1227498,"avatar_url":"ht
                                                              //| tps://avatars.githubusercontent.com/u/1227498?v=3","gravatar_id":"","url":"h
                                                              //| ttps://api.github.com/users/tdd-elevator-training","html_url":"https://githu
                                                              //| b.com/tdd-elevator-training","followers_url":"https://api.github.com/users/t
                                                              //| dd-elevator-training/followers","following_url":"https://api.github.com/user
                                                              //| s/tdd-elevator-training/following{/other_user}","gists_url":"https://api.git
                                                              //| hub.com/users/tdd-elevator-training/gists{/gist_id}","starred_url":"https://
                                                              //| api.github.com/users/tdd-elevator-training/starred{/owner}{/repo}","subscrip
                                                              //| tions_url":"https://api.github.com/users/tdd-elevator-training/subscriptions
                                                              //| ","organizations_url":"https://api.github.com/users/tdd-elevator-training/or
                                                              //| gs","repos_url":"https:/
                                                              //| Output exceeds cutoff limit.

I want to use this information and access each and every (even nested) JSON element. How can I do that?
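
One way to reach the nested fields without extra libraries (a sketch using the JSON parser bundled with Scala 2.11; it returns untyped Maps and Lists, so the casts below are assumptions about the shape of the GitHub response):

import scala.io.Source
import scala.util.parsing.json.JSON

val body = Source.fromURL("https://api.github.com/search/repositories?q=tetris").mkString

JSON.parseFull(body) match {
  case Some(root) =>
    val data = root.asInstanceOf[Map[String, Any]]
    println(data("total_count")) // a top-level field
    val items = data("items").asInstanceOf[List[Map[String, Any]]]
    items.headOption.foreach(item => println(item("full_name"))) // a nested field
  case None =>
    println("response was not valid JSON")
}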

by Sahil Sharma at May 23, 2015 06:50 PM

Lobsters

StackOverflow

Scala connection pool library?

I'm trying to use Squeryl in a new Scala project. This is my first project in Scala, so I'm looking for a good Scala library to handle connection pooling. Of course I might as well use a Java library. What would be the best fit for Squeryl? Among Java libraries I'm considering DBCP, C3P0, Proxool and BoneCP, with BoneCP being a serious candidate judging from their benchmarks.

by DrKarl at May 23, 2015 06:31 PM

Planet Emacsen

Irreal: Mastering Emacs is Out

Mickey Petersen's new book, Mastering Emacs, is out.

I've already got mine. I haven't read it yet, obviously, but from my quick scan it looks really good.

by jcs at May 23, 2015 06:29 PM

CompsciOverflow

How important is it to find a deterministic polynomial time algorithm to construct Ramanujan graphs?

To be clear, I don't know the difference between, say, the conferences SODA, STOC, and FOCS. Measured in terms of such conferences, where would such a result be publishable?

This is not a "technical" question. I want to understand how important the community would consider this result if, hypothetically, someone achieved it.


Maybe you can just comment rather than write an "answer".

by user6818 at May 23, 2015 06:27 PM

StackOverflow

Undefined CSS File in a play2 project

I have downloaded angularjs eclipse plugin from market place in my eclipse kepler. I imported the maven project whose packaging is play2 and converted it to angularjs project but then it is showing following errors:

index.scala.html:

(screenshot omitted)

The error message:

Undefined CSS file 

Directory Structure:

(screenshot omitted)

by My God at May 23, 2015 06:04 PM

using regex in jinja 2 for ansible playbooks

Hi, I am new to Jinja2 and am trying to use a regular expression as shown below:

{% if ansible_hostname == 'uat' %}
   {% set server = 'thinkingmonster.com' %}

{% else %}
   {% set server = 'define yourself' %}
{% endif %}

{% if {{ server }} match('*thinking*') %}
  {% set ssl_certificate = 'akash' %}

{% elif {{ server }} match( '*sleeping*')%}
   {% set ssl_certificate = 'akashthakur' %}
{% endif %}

Based on the value of "server" I would like to decide which certificate to use, i.e. if the domain contains the "thinking" keyword then use this certificate, and if it contains the "sleeping" keyword then use that certificate.

But I didn't find any Jinja2 filter supporting this. Please help me. I found some Python code that would surely work, but how do I use Python in Jinja2 templates?

by thinkingmonster at May 23, 2015 05:59 PM

QuantOverflow

Need for Binomial Representation Theorem

In some texts (e.g. Baxter & Rennie, Shreve I) the binomial model is first constructed using the usual backward induction argument, and it is concluded that by no-arbitrage the time $t$ value of a claim with time $T$ payoff $X$ is $\mathbb{E}_\mathbb{Q}[\frac{B_t}{B_T} X|\mathcal{F}_t]$, where $B_t$ is the price of a cash bond at time $t$. In other words, because we determined the price of the claim at each step, the price of the portfolio that replicates that claim must be the claim's price at each step, else arbitrage. There is no mention of self-financing strategies (SFSs) or binomial representation theorem (BRT); rather, we explicitly construct a hedging strategy that replicates the claim's payoff.

Only after we have determined this price does it seem that the concept of SFSs is introduced, with the BRT invoked to prove their existence. Then we use a slightly different argument to arrive at the same price: the value of an SFS that replicates $X$ is $\mathbb{E}_\mathbb{Q}[\frac{B_t}{B_T} X|\mathcal{F}_t]$ by the BRT, and because this is an SFS that replicates $X$, this must be the price of the claim, else arbitrage.

So we have two distinct approaches to arrive at the same conclusion. My question is, what purpose does the BRT serve in the binomial model? Does it just serve as an intuition builder for the martingale representation theorem (MRT) in continuous time models, where explicit construction of the hedging strategy isn't as clear?

If that's the case, it seems the BRT is specific to the binomial model, while the MRT is model-free. Is this correct?

by bcf at May 23, 2015 05:51 PM

/r/emacs

CompsciOverflow

Graph searching: Breadth-first vs. depth-first

When searching graphs, there are two easy algorithms: breadth-first and depth-first (usually done by adding all adjacent graph nodes to a queue (breadth-first) or stack (depth-first)).

Now, are there any advantages of one over the other?

The ones I could think of:

  • If you expect your data to be pretty far down inside the graph, depth-first might find it earlier, as you are going down into the deeper parts of the graph very fast.
  • Conversely, if you expect your data to be pretty far up in the graph, breadth-first might give the result earlier.

Is there anything I have missed or does it mostly come down to personal preference?

by malexmave at May 23, 2015 05:30 PM

StackOverflow

Proper way to make a Spark Fat Jar using SBT

I need a Fat Jar with Spark because I'm creating a custom node for Knime. Basically it's a self-contained jar executed inside Knime and I assume a Fat Jar is the only way to spawn a local Spark Job. Eventually we will go on submitting a job to a remote cluster but for now I need it to spawn this way.

That said, I made a Fat Jar using this: https://github.com/sbt/sbt-assembly

I made an empty sbt project, included Spark-core in the dependencies and assembled the jar. I added it to the manifest of my custom Knime node and tried to spawn a simple job (parallelize a collection, collect it and print it). It starts, but I get this error:

No configuration setting found for key 'akka.version'

I have no idea how to solve it.

Edit: this is my build.sbt

name := "SparkFatJar"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.3.0"
)


libraryDependencies +=  "com.typesafe.akka" %% "akka-actor" % "2.3.8"

assemblyJarName in assembly := "SparkFatJar.jar"

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

I found this merge strategy for Spark somewhere on the internet, but I can't find the source right now.
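
The 'akka.version' key lives in Akka's reference.conf, and with the catch-all MergeStrategy.first above only one library's reference.conf survives assembly. A commonly suggested tweak (a sketch, unverified for this exact setup) is to concatenate reference.conf files instead:

assemblyMergeStrategy in assembly := {
  case "reference.conf"              => MergeStrategy.concat  // keep akka.version etc.
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x                             => MergeStrategy.first
}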

by Chobeat at May 23, 2015 05:19 PM

What is the intuition behind the checkerboard covering recursive algorithm and how does one get better at formulating such an algorithm?

You may have heard of the classic checkerboard covering puzzle. How do you cover a checkerboard that has one corner square missing, using L-shaped tiles?

There is a recursive approach to this as explained in the book "Python Algorithms Mastering Basic Algorithms in the Python Language."

The idea is to split the board into 4 smaller squares, then place an L-shaped tile at the center of the larger board, effectively creating 4 smaller boards each with one square missing, and continue via recursion.

Conceptually, it's easy to understand, but I find it very difficult to think about an implementation. Here's one implementation:

    def cover(board, lab=1, top=0, left=0, side=None):
        if side is None: side = len(board)

        # Side length of each of the four subboards
        s = side // 2

        # Offsets pairing each outer corner with its inner (center) cell:
        # along each axis the outer coordinate is 0 or side-1, and the
        # matching center cell sits at offset -1 or 0 relative to top+s/left+s.
        offsets = ((0, -1), (side-1, 0))

        for dy_outer, dy_inner in offsets:
            for dx_outer, dx_inner in offsets:
                # Exactly one corner of the current board is already covered
                # (the missing square, or a cell labeled by the parent call).
                # If this outer corner is not the covered one...
                if not board[top+dy_outer][left+dx_outer]:
                    # ... label the center cell in the same quadrant; the three
                    # labeled cells together form one L-shaped tile.
                    board[top+s+dy_inner][left+s+dx_inner] = lab

        # Next label:
        lab += 1
        if s > 1:
            for dy in [0, s]:
                for dx in [0, s]:
                    # Recursive calls, if s is at least 2:
                    lab = cover(board, lab, top+dy, left+dx, s)

        # Return the next available label:
        return lab

Running the code produces the following:

    board = [[0]*8 for i in range(8)]
    board[7][7] = -1
    cover(board)
    for row in board:
        print((" %2i"*8)%tuple(row))

      3  3  4  4  8  8  9  9
      3  2  2  4  8  7  7  9
      5  2  6  6 10 10  7 11
      5  5  6  1  1 10 11 11
     13 13 14  1 18 18 19 19
     13 12 14 14 18 17 17 19
     15 12 12 16 20 17 21 21
     15 15 16 16 20 20 21 -1

It took me some time to understand this implementation, and I'm not sure I even completely understand it, especially the thinking behind the offsets line. Can someone explain the implementation succinctly? How does one develop the intuition to think about solutions to problems of this type? I found the solution very clever, especially the way the offsets line is set up. If someone could help me understand this, and perhaps offer suggestions on how to get better, I would greatly appreciate it.

Thanks!

by yeehaw at May 23, 2015 05:08 PM

QuantOverflow

R package for portfolio

In the context of modern portfolio theory, one often wishes to minimise $\mathbf{w}^{\mathrm{{\scriptstyle T}}}\boldsymbol{\Sigma}\mathbf{w}$ subject to $\mathbf{w}^{T}\boldsymbol{\mu}=c_{1}$, $\left\Vert \mathbf{w}\right\Vert _{1}<c_{2}$ and $\mathbf{w}^{T}\mathbf{1}=1$. Is there an R function or package to do this?

Updates

Update 1: For example, there are packages such as fPortfolio and quadprog which come up in my Google searches. But which function would I use to solve this problem?

Update 2: I thought portfolio.optim in R looked promising, until I realised you couldn't supply your own estimator of $\boldsymbol{\mu}$.

by Antonius Gavin at May 23, 2015 05:04 PM

TheoryOverflow

Maximum computational power of a C implementation

If we go by the book (or any other version of the language specification if you prefer), how much computational power can a C implementation have?

Note that “C implementation” has a technical meaning: it is a particular instantiation of the C programming language specification where implementation-defined behavior is documented. A C implementation doesn't have to be able to run on an actual computer. It does have to implement the whole language, including every object having a bit-string representation and types having an implementation-defined size.

For the purpose of this question, there is no external storage. The only input/output you may perform is getchar (to read the program input) and putchar (to write the program output). Also any program that invokes undefined behavior is invalid: a valid program must have its behavior defined by the C specification plus the implementation's description of implementation-defined behaviors listed in appendix J (for C99). Note that calling library functions that are not mentioned in the standard is undefined behavior.

My initial reaction was that a C implementation is nothing more than a finite automaton, because it has a limit on the amount of addressable memory (you can't address more than sizeof(char*) * CHAR_BIT bits of storage, since distinct memory addresses must have distinct bit patterns when stored in a byte pointer).

However I think an implementation can do more than this. As far as I can tell, the standard imposes no limit on the depth of recursion. So you can make as many recursive function calls as you like, only all but a finite number of calls must use non-addressable (register) arguments. Thus a C implementation that allows arbitrary recursion and has no limit on the number of register objects can encode deterministic pushdown automata.

Is this correct? Can you find a more powerful C implementation? Does a Turing-complete C implementation exist?

by Gilles at May 23, 2015 05:03 PM

Lobsters

Twitter

Graduating Apache Parquet

ASF, the Apache Software Foundation, recently announced the graduation of Apache Parquet, a columnar storage format for the Apache Hadoop ecosystem. At Twitter, we’re excited to be a founding member of the project.

Apache Parquet is built to work across programming languages, processing frameworks, data models and query engines including Apache Hive, Apache Drill, Impala and Presto.

At Twitter, Parquet has helped us scale by reducing storage requirements by at least one-third on large datasets, as well as improving scan and deserialization time. This has translated into hardware savings and reduced latency for accessing data. Furthermore, Parquet’s integration with so many tools creates opportunities and flexibility for query engines to help optimize performance.

Since we announced Parquet, these open source communities have integrated the project: Apache Crunch, Apache Drill, Apache Hive, Apache Pig, Apache Spark, Apache Tajo, Kite, Impala, Presto and Scalding.

What’s new?

The Parquet community just released version 1.7.0 with several new features and bug fixes. This update includes:

  • A new filter API for Java and DSL for Scala that uses statistics metadata to filter large batches of records without reading them
  • A memory manager that will scale down memory consumption to help avoid crashes
  • Improved MR and Spark job startup time
  • Better support for evolving schemas with type promotion when reading
  • More logical types for storing dates, times, and more
  • Improved compatibility between Hive, Avro and other object models

As usual, this release also includes many other bug fixes. We’d like to thank the community for reporting these and contributing fixes. Parquet 1.7.0 is now available for download.

Future work

Although Parquet has graduated, there’s still plenty to do, and the Parquet community is planning some major changes to enable even more improvements.

First is updating the internals to work with the zero-copy read path in Hadoop, making reads even faster by not copying data into Parquet’s memory space. This will also enable Parquet to take advantage of HDFS read caching and should pave the way for significant performance improvements.

After moving to zero-copy reads, we plan to add a vectorized read API that will enable processing engines like Drill, Presto and Hive to save time by processing column data in batches before reconstructing records in memory, if at all.

We also plan to add more advanced statistics-based record filtering to Parquet. Statistics-based record filtering allows us to drop entire batches of data while reading only a small amount of metadata. For example, we'll take advantage of dictionary-encoded columns and apply filters to batches of data by examining a column's dictionary, and in cases where no dictionary is available, we plan to store a bloom filter in the metadata.

Aside from performance, we’re working on adding POJO support in the Parquet Avro object model that works the same way Avro handles POJOs in avro-reflect. This will make it easier to use existing Java classes that aren’t based on one of the already-supported object models and enable applications that rely on avro-reflect to use Parquet as their data format.

Getting involved

Parquet is an independent open source project at the ASF. To get involved, join the community mailing lists and any of the community hangouts the project holds. We welcome everyone to participate to make Parquet better and look forward to working with you in the open.

Acknowledgements

We would like to thank Ryan Blue from Cloudera for helping craft parts of this post and the wider Parquet community for contributing to the project. Specifically, contributors from a number of organizations (Twitter, Netflix, Criteo, MapR, Stripe, Cloudera, AmpLab) contributed to this release. We'd also like to thank these people: Daniel Weeks, Zhenxiao Luo, Nezih Yigitbasi, Tongjie Chen, Mickael Lacour, Jacques Nadeau, Jason Altekruse, Parth Chandra, Colin Marc (@colinmarc), Avi Bryant (@avibryant), Ryan Blue (@6d352b5d3028e4b), Marcel Kornacker, Nong Li (@nongli), Tom White (@tom_e_white), Sergio Pena, Matt Massie (@matt_massie), Tianshuo Deng, Julien Le Dem, Alex Levenson, Chris Aniszczyk and Lukas Nalezenec.

May 23, 2015 05:01 PM

Overcoming Bias

Elite Evaluator Rents

The elite evaluator story discussed in my last post is this: evaluators vary in the perceived average quality of the applicants they endorse. So applicants seek the highest ranked evaluator willing to endorse them. To keep their reputation, evaluators can’t consistently lie about the quality of those they evaluate. But evaluators can charge a price for their evaluations, and higher ranked evaluators can charge more. So evaluators who, for whatever reason, end up with a better pool of applicants can sustain that advantage and extract continued rents from it.

This is a concrete plausible story to explain the continued advantage of top schools, journals, and venture capitalists. On reflection, it is also a nice concrete story to help explain who resists prediction markets and why.

For example, within each organization, some “elites” are more respected and sought after as endorsers of organization projects. The better projects look first to get endorsement of elites, allowing those elites to sustain a consistently higher quality of projects that they endorse. And to extract higher rents from those who apply to them. If such an organization were instead to use prediction markets to rate projects, elite evaluators would lose such rents. So such elites naturally oppose prediction markets.

For a more concrete example, consider that in 2010 the movie industry successfully lobbied the US congress to outlaw the Hollywood Stock Exchange, a real money market just approved by the CFTC for predicting movie success. Hollywood is dominated by a few big studios. People with movie ideas go to these studios first with proposals, to gain a big studio endorsement, to be seen as higher quality. So top studios can skim the best ideas, and leave the rest to marginal studios. If people were instead to look to a prediction market to estimate movie quality, the value of a big studio endorsement would fall, as would the rents that big studios can extract for their endorsements. So studios have a reason to oppose prediction markets.

While I find this story as stated pretty persuasive, most economists won’t take it seriously until there is a precise formal model to illustrate it. So without further ado, let me present such a model. Math follows.

Let a unit quantity of applicants have quality $x$ uniformly distributed over the range $x \in [0,1]$. An evaluator $i$ claims that its endorsed applicants have a quality of at least $x_i$, and later suffers prohibitive penalties if such claims are ever found to be wrong. Thus an evaluator who chooses limit $x_i$ can actually only endorse applicants for whom $x \ge x_i$. There are $N$ evaluators $i \in [1,N]$ who are endowed with different prior reputations that restrict their choice of limit $x_i$. Evaluator $i$ must choose $x_i \in [0, 2^{i-N})$ because observers just won't believe that they could attract applicants of quality $x \ge 2^{i-N}$.

An evaluator who charges price $p$ to accurately endorse the set of applicants in the range $[a,b]$ gains profit $p(b-a)$; evaluators have no other costs or revenue. Applicants who pay price $p$ to be endorsed as having quality $x \ge a$ gain net value $V = a - p$ because of how they are treated by later observers. This value is not larger due to adverse selection in the later observer process.

The order of play is as follows. First, evaluators choose sequentially in order of increasing index $i$. Each $i$ chooses both price $p_i$ and quality $x_i$ simultaneously. After evaluators have chosen, then applicants, knowing all the $p_i$ and $x_i$ and their own quality $x$, simultaneously choose an evaluator. Finally evaluators choose whether or not to endorse each of their applicants. (We get the same results if applicants don't know their $x$, and can repeatedly apply to evaluators until one endorses them.) Let $i=0$ correspond to paying nothing and getting no endorsement, with $x_0 = p_0 = 0$.

A simple (and maybe unique) equilibrium of this game is: each evaluator $i$ chooses $p_i = x_i = 2^{i-N-1}$, each applicant applies to the highest $i$ such that their $x \ge x_i$, and then all applicants are accepted. (Applicants with $x < x_1$ "apply" for no endorsement and get it.) All applicants get exactly zero net value, and evaluator $i$ endorses $2^{i-N-1}$ applicants, gaining profit $2^{2(i-N-1)}$.

Note that higher ranked evaluators endorse more applicants, and gain more profits. “Big” goes with “high”. And evaluators take all the gains in this world; applicants get nothing.

Proof: For $x_i, p_i$ to beat offer $x_{i-1}, p_{i-1}$, the max $p_i$ given $x_i$ must satisfy $x_i - p_i \ge x_{i-1} - p_{i-1}$, which gives $x_i - p_i = x_{i-1} - p_{i-1}$ and $p_i = p_{i-1} + x_i - x_{i-1}$. Assume $x_i = c_i + (1-c_i)(x_{i-1} - p_{i-1})$. This gives the correct $x_{N+1} = 1$ with $c_{N+1} = 1$, and substituting these into profit $\pi_i = p_i(x_{i+1} - x_i)$ gives $\pi_i = (x_i + p_{i-1} - x_{i-1})(c_{i+1} + (1-c_{i+1})(x_i - p_i) - x_i)$. Maximizing $\pi_i$ with respect to $x_i$ gives the first order condition $x_i = c_{i+1}/2 + (1 - c_{i+1}/2)(x_{i-1} - p_{i-1})$, which confirms the assumption with $c_i = c_{i+1}/2$. Combined with $c_{N+1} = 1$ and $x_0 - p_0 = 0$, this gives $x_i = p_i = c_i = 2^{i-N-1}$.
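
A quick numeric sketch of this equilibrium (my own illustration, with $N = 4$): cutoffs and prices double as $i$ rises, and profits quadruple.

val N = 4
for (i <- 1 to N) {
  val cutoff = math.pow(2, i - N - 1)   // x_i = p_i = 2^(i-N-1)
  val endorsed = cutoff                 // measure of applicants in [x_i, x_{i+1})
  val profit = cutoff * endorsed        // = 2^(2(i-N-1))
  println(f"i=$i cutoff/price=$cutoff%.4f endorsed=$endorsed%.4f profit=$profit%.6f")
}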

by Robin Hanson at May 23, 2015 04:25 PM

StackOverflow

How to convert HTML or URL output in Scala to JSON

val url = "http://api.hostip.info/get_json.php?ip=12.215.42.19"
val result = scala.io.Source.fromURL(url).mkString
println(result)

This gives me the complete page. I now want to access individual elements of this page to do some data analysis. In C# we did it using a dynamic variable and then putting the data into some class (a JSON object).

How can we map this URL result onto some classes for analysis?

The problem is to download the page at the URL and access its individual elements through Scala code.

by Sahil Sharma at May 23, 2015 04:16 PM

Lobsters

UnixOverflow

ZFS ACL (NFS4 ACL)

I'm using OpenIndiana (Solaris 10) containers and I want to let users upload web content over SFTP. I've managed to set up OpenSSH's internal-sftp and lock users under the web root. All the files under the web root should be owned by the SFTP user, but at the same time the web server should have read access to all these files. It works well using an ACL like

chmod -R A3+user:www:list_directory/read_data/execute:file_inherit/dir_inherit:allow htdocs/

But whenever a user tries to chmod 777 a directory to make it web-server writable, the directory loses its ACLs. Denying write_acl prevents the user from changing even discretionary access control attributes.

Ideally users should upload content over SFTP, and the web server should have read access everywhere plus full access to 777 directories. Any idea how to achieve that?

by Dmytro Leonenko at May 23, 2015 03:46 PM

QuantOverflow

Expected Utility and $\log$

I've just started reading about expected utility and utility functions and have the following question.

$\textbf{Question:}$ An investor has an initial wealth of 100 and a utility function of the form: \begin{align} U(w) = \log(w) \end{align} What is their expected utility?


Upon one of the slides I found on the web it states the following:

$\textbf{Calculating Expected Utility}$
1. When the choice variable $x$ is constant, then $E(U(x)) = U(x)$.
2. When the choice variable $x$ is a random variable, then $E(U(x))$ is driven by the PDF of $x$.
3. If $x$ has $k$ outcomes, each with probability $p_i$, then \begin{align} E(U(x)) = \sum_{i=1}^{k} p_i U(x_i) \end{align}

Since I'm told the initial wealth is 100 does this simply mean the expected utility is $E(U(100)) = \log(100)$?

Apologies if this is trivial - I'm just starting out.

All help is appreciated.
John

by John Smith at May 23, 2015 03:37 PM

Lobsters

StackOverflow

Meaning of ref@Ping in akka receive

I was surfing some code examples on Akka and I found a particular example whose meaning I would like to be sure of:

def receive: Receive = {
  case original@Ping(x) => // do stuff
  case _ => // do stuff
}

Ping is a case class used as a message in the example. But what's the meaning of that original@? Is it the message sender? If so, is there any advantage of this approach over using the sender variable?

Sorry, but I can't give you the link because I can't find it anymore...

I'm not sure if this is an Akka thing or just an advanced Scala pattern-matching feature that I wasn't aware of...
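
For reference, a self-contained sketch showing that this is plain Scala pattern binding, with no Akka involved: @ binds the whole matched value to a name while still destructuring it, and it has nothing to do with the sender.

case class Ping(x: Int)

Ping(42) match {
  // `original` is bound to the entire Ping(42) value,
  // while `x` is bound to its field.
  case original @ Ping(x) => println(s"whole=$original field=$x")
}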

by pedrorijo91 at May 23, 2015 03:33 PM

Planet Theory

TR15-085 | Polynomially Low Error PCPs with polyloglogn Queries via Modular Composition | Irit Dinur, Prahladh Harsha, Guy Kindler

We show that every language in NP has a PCP verifier that tosses $O(\log n)$ random coins, has perfect completeness, and a soundness error of at most $1/poly(n)$, while making at most $O(poly\log\log n)$ queries into a proof over an alphabet of size at most $n^{1/poly\log\log n}$. Previous constructions that obtain $1/poly(n)$ soundness error used either $poly\log n$ queries or an exponential-sized alphabet, i.e. of size $2^{n^c}$ for some $c>0$. Our result is an exponential improvement in both parameters simultaneously. Our result can be phrased as a polynomial-gap hardness for approximate CSPs with arity $poly\log\log n$ and alphabet size $n^{1/poly\log n}$. The ultimate goal, in this direction, would be to prove polynomial hardness for CSPs with constant arity and polynomial alphabet size (aka the sliding scale conjecture for inverse polynomial soundness error). Our construction is based on a modular generalization of previous PCP constructions in this parameter regime, which involves a composition theorem that uses an extra `consistency' query but maintains the inverse polynomial relation between the soundness error and the alphabet size. Our main technical/conceptual contribution is a new notion of soundness, which we refer to as distributional soundness, that replaces the previous notion of "list decoding soundness", and that allows us to prove a modular composition theorem with tighter parameters. This new notion of soundness allows us to invoke composition a super-constant number of times without incurring a blow-up in the soundness error.

May 23, 2015 03:24 PM

StackOverflow

Playframework does not start

I have been trying to install Play Framework and create a new application, but I keep getting the message "Killed". I tried with version 2.2.x:

play 2.2.6 built with Scala 2.10.3 (running Java 1.7.0_79), http://www.playframework.com

The new application will be created in /etc/activator/play/survey-app

What is the application name? [survey-app]
> 

Which template do you want to use for this new application? 

  1             - Create a simple Scala application
  2             - Create a simple Java application

> 2
OK, application survey-app is created.

Have fun!

root@Webients:/etc/activator/play# cd survey-app/
root@Webients:/etc/activator/play/survey-app# play run
Getting org.scala-sbt sbt 0.13.0 ...
Killed

And also with version 2.3.x:

root@Webients:~# activator new survey-app play-java

Fetching the latest list of templates...

OK, application "survey-app" is being created using the "play-java" template. 

Killed

I tried to follow all the installation steps recommended in the documentation and searched for a reason on many sites, but can't find one.

The machine I'm running on is a private VPS with Ubuntu 14.04 64-bit, 80 GB disk, 128 MB memory, 64 MB vswap.

by user2899382 at May 23, 2015 03:05 PM

QuantOverflow

Using Gordon's Growth Model to find value of corporation

This is a question posed to us by my professor in my finance class. I was under the impression that the Gordon Growth Model was used to find the intrinsic value of a stock, but I am unsure how to plug in these values and use it to find the value of this corporation.

The way I learned it was P = D/(k - g), where P is the value of the stock, D is the expected dividend per share one year from now, k is the required rate of return on equity, and g is the dividend growth rate. What I don't understand is where I would use the values given in the problem in this model, since it's the value of the corporation and not a dividend. Any help would be much appreciated.

Question

Suppose Microsoft Corporation’s projected free cash flow for next year is FCF = $8.75 billion, and due to expected lower revenues from personal computers and slower growth of surface sales FCF is expected to grow at a constant rate of only 4.5% into the infinite future. The company’s weighted average cost of capital is 11.5%. Use the Gordon Growth Model to estimate the value of the corporation
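
For what it's worth, the firm-value variant of the same formula replaces the dividend with next year's free cash flow and k with the WACC; on the numbers given, the arithmetic (a sketch, not a verified answer) works out to

$$V_0 = \frac{FCF_1}{WACC - g} = \frac{8.75}{0.115 - 0.045} = \frac{8.75}{0.07} = 125 \text{ billion.}$$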

by Bobby at May 23, 2015 03:00 PM

/r/compsci

Demonstrating Turing completeness

Please forgive me if this is the wrong subreddit. I'm a mathematician (graph theory, so I'm a neighbor to the theoretical CS folks). I'm looking for a way to show that a natural process is Turing complete (like the Magic:TG scenario).

Clearly if I can simulate a TM or Rule 110 I'm done. Conway's Life would be sufficient, too. I've also read that it can be done by simulating certain logic gates (like NAND), but I'm not sure what else is needed. Surely just demonstrating that a NAND gate is possible isn't enough to show Turing completeness. What else would I need to show?

Are there any other methods?

submitted by AerosolHubris
[link] [7 comments]

May 23, 2015 02:55 PM

StackOverflow

Scala: initializing vertexes of a graph

I have something like this:

def vertices: List[Vertex] = List()
def edges: List[Edge] = List()
def adjMatrix: Array[Array[Int]] = Array.ofDim[Int](vertices.size, vertices.size)
def addVertex(lbl: Char) = {
    val vertex = new Vertex(lbl)
    vertex :: vertices
    vertex
}
class Vertex(label: Char) {
  val visited: Boolean = false
  def echo() = print(label)
}

when I want to create nodes like this:

def main(args: Array[String]) {
    val node1 = g.addVertex('A')
    val node2 = g.addVertex('B')
    val node3 = g.addVertex('C')

    println(g.vertices.size) // the result is 0 ???!!!
}

I do not know why the vertices list isn't being filled.
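
A sketch of the likely fix: def vertices: List[Vertex] = List() builds a brand-new empty list on every access, and vertex :: vertices creates a new list that is immediately discarded. Holding the list in a var and reassigning it makes the addition stick:

var vertices: List[Vertex] = List()

def addVertex(lbl: Char): Vertex = {
  val vertex = new Vertex(lbl)
  vertices = vertex :: vertices // reassign so the new vertex is kept
  vertex
}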

by Valerin at May 23, 2015 02:52 PM

Lobsters

CompsciOverflow

Use the pumping lemma to show that the language is not regular [on hold]

I am working on this problem: use the pumping lemma to show that the language $\{0^n 1^{n} \mid n ≥ 1\}$ is not regular. Could someone give me a suggestion about how to approach it?

by user3709122 at May 23, 2015 02:45 PM

StackOverflow

Scala Unicode Syntax

I know that these two are equivalent in Scala:

for {x <- xs} yield x
case Nil => println("foo")

Note the Unicode replacement for <- and =>:

for {x ← xs} yield x
case Nil ⇒ println("foo")

What is this feature called? I googled various combinations of "Scala Unicode Operators/Symbols" and did not find what I was looking for... What is the full list of equivalent symbols? Where is this documented on the Scala website? How do I use this practically? Through keymappers? How do I easily enable this in my IDE (IntelliJ), e.g. if I type =>, I want it to auto-correct to ⇒ in a .scala file. Is there an sbt plugin that does this for me, maybe?

by wrick at May 23, 2015 02:42 PM

Determine remote device endpoint UDP Clojure CLR

Trying to perform the equivalent of this C# code in Clojure CLR:

using System.Net;
IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
EndPoint remote = (EndPoint) sender;
recv = sock.ReceiveFrom(data, ref remote);

What I've tried in Clojure that does not work:

(let [
      sender (IPEndPoint. (IPAddress/Any) 0)
      ^EndPoint remote ^EndPoint sender
      recv (.ReceiveFrom sock data (by-ref remote))
      ]
      (println (.ToString remote))
      ;; Do something with data...
 )

It just shows 0.0.0.0:0. I'm thinking the ref is not working, but I'm also not sure of the hint/cast syntax.

I've looked at here for info on ref https://github.com/richhickey/clojure-clr/wiki/CLR-Interop And here about specifying types: https://github.com/clojure/clojure-clr/wiki/Specifying-types

by casillic at May 23, 2015 02:30 PM

Custom Writes for DateTime in JSON Serializer

How do I get a custom behavior in the JSON Serializer for DateTime? My goal is to only serialize the year.

Here is my model:

case class Model(id: Option[Int], name: String, userID: String, date: DateTime, material: String, location: String, text: String, pathObject: Option[String], pathTexure: Option[String], pathThumbnail: Option[String])

object Model {
  implicit val tswrites: Writes[DateTime] = Writes { (dt: DateTime) => JsString(dt.year.get.toString) }

  implicit val modelWrites: Writes[Model] = (
    (JsPath \ "id").write[Option[Int]] and
    (JsPath \ "name").write[String] and
    (JsPath \ "userId").write[String] and
    (JsPath \ "date").write[DateTime] and
    (JsPath \ "material").write[String] and
    (JsPath \ "location").write[String] and
    (JsPath \ "text").write[String] and
    (JsPath \ "f1").write[Option[String]] and
    (JsPath \ "f2").write[Option[String]] and
    (JsPath \ "f3").write[Option[String]])(unlift(models.Model.unapply))
}

The date field serializes as 631148400000

The desired date field serialization is 1990.
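
A hedged guess at a fix (my own): if the default Joda Writes is being resolved instead of the custom one, passing the custom Writes explicitly pins it down:

implicit val yearWrites: Writes[DateTime] = Writes(dt => JsString(dt.year.get.toString))

// pass it explicitly instead of relying on implicit search:
(JsPath \ "date").write[DateTime](yearWrites)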

by Farmor at May 23, 2015 02:24 PM

CompsciOverflow

Prove/Disprove: $L_1, L_2 \in RE-R \implies L_1 \cup L_2 \notin R$

Prove/Disprove: $L_1, L_2 \in RE-R \implies L_1 \cup L_2 \notin R$

My first intuition is "Yes", since we may look at $M_1, M_2$ which accept $L_1, L_2$ respectively. Then, WLOG, there is a $w$ on which $M_1$ doesn't halt, and so the machine $M$ which runs $M_1, M_2$ in parallel may not halt.
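
A hedged note (my own, not part of the original question): the non-halting of one particular machine does not rule out some other decider for the union, and in fact the claim fails. For any $A \in RE \setminus R$ over $\Sigma = \{0,1\}$, take $L_1 = \{0w : w \in A\} \cup \{1w : w \in \Sigma^*\}$ and $L_2 = \{1w : w \in A\} \cup \{0w : w \in \Sigma^*\}$; both are in $RE \setminus R$ (deciding either would decide $A$), yet $L_1 \cup L_2 = \Sigma^+ \in R$.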

by Elimination at May 23, 2015 02:20 PM

Lobsters

CompsciOverflow

What happens when the words transfered on the bus are smaller than its width?

So what happens if we're transferring lots of 8-bit words on a 32-bit bus? Does each bus cycle transfer only 8 bits at a time, wasting the other 24 lines of the bus? Or does it transfer 4 words in each bus cycle?

by sedulam at May 23, 2015 01:55 PM

Planet FreeBSD

HotFix release to 10.1.2 – Now available

A minor hotfix update to the 10.1.2 ISOs has been released today. This includes fixes to advanced installation using raidz, cache and log devices, as well as a fix to the text installer when booted in UEFI mode. Users who have already installed 10.1.2 will not need to download it, and can instead online-update to install any fixes.

Download Now

by Kris Moore at May 23, 2015 01:53 PM

Lobsters

CompsciOverflow

Soft real time operating system

I know the basic idea of, and the difference between, hard and soft RTOS.

I wanted to know of a large-scale, specific example of a soft RTOS, and I read on this site that even Linux is a soft RTOS.

Is the performance degradation due to the non-predictability of tasks the main disadvantage of this OS, or are there some more disadvantages?

Also, what are its advantages over a hard RTOS, apart from not being as time-strict?

by Steve at May 23, 2015 01:46 PM

TheoryOverflow

What's known about basing one-way functions on the $P \neq NP$ assumption?

Is there a conditional impossibility result, or is the question completely open?

by user34204 at May 23, 2015 01:39 PM

/r/compsci

StackOverflow

How to check whether Clojure code is being evaluated inside a REPL?

I would like to format my logs differently depending on whether my code is being run from a REPL or if I'm running the compiled jar.

Is there any simple way to do this? I was thinking maybe Leiningen leaves a trace somewhere when running the REPL.

by noziar at May 23, 2015 01:38 PM

Is this possible to implement laziness with circular dependencies in Scala?

This code causes a StackOverflowError:

lazy val leftChild = new Node(true, root, Seq(2), Seq())
lazy val rightChild = new Node(true, root, Seq(3), Seq())
lazy val root :Node = new Node(false, null, Seq(1), Seq(leftChild, rightChild))

where Node is defined as follows:

case class Node(isLeaf: Boolean, parent: Node,  keys: Seq[Int], pointers: Seq[Node])

One possible solution would be to give up using a case class. How do I properly implement this structure using immutable state only? Is this possible with lazy vals while keeping Node a case class?
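
A hedged sketch of one workaround (my own, assuming a plain class is acceptable): by-name constructor parameters defer evaluation of the cyclic references, which case classes do not support, so the fields are captured lazily instead:

class Node(val isLeaf: Boolean, parent0: => Node, val keys: Seq[Int], pointers0: => Seq[Node]) {
  lazy val parent: Node = parent0           // forced only on first access
  lazy val pointers: Seq[Node] = pointers0
}

lazy val leftChild: Node = new Node(true, root, Seq(2), Seq())
lazy val rightChild: Node = new Node(true, root, Seq(3), Seq())
lazy val root: Node = new Node(false, null, Seq(1), Seq(leftChild, rightChild))

Constructing root no longer forces leftChild during its own initialization, so the cycle never recurses.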

by Łukasz Rzeszotarski at May 23, 2015 01:20 PM

Ungrouping a (key, list(values)) pair in Spark/Scala

I have data formatted in the following way:

DataRDD = [(String, List[String])]

The first string indicates the key and the list houses the values. Note that the number of values is different for each key (but is never zero). I am looking to map the RDD in such a way that there will be a key, value pair for each element in the list. To clarify this, imagine the whole RDD as the following list:

DataRDD = [(1, [a, b, c]), 
           (2, [d, e]),
           (3, [a, e, f])]

Then I would like the result to be:

DataKV  = [(1, a),
           (1, b),
           (1, c),
           (2, d),
           (2, e),
           (3, a),
           (3, e),
           (3, f)]

Consequently, I would like to return all combinations of keys which have identical values. This may be returned into a list for each key, even when there are no identical values:

DataID  = [(1, [3]),
           (2, [3]),
           (3, [1, 2])]

Since I'm fairly new to Spark and Scala and have yet to fully grasp their concepts, I hope some of you can help me, even if it's just with a part of this.
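
A hedged sketch of both steps (my own; it assumes an RDD[(Int, List[String])] named dataRDD, with Spark boilerplate included for completeness):

import org.apache.spark.{SparkConf, SparkContext}

object Ungroup {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ungroup").setMaster("local[*]"))
    val dataRDD = sc.parallelize(Seq(
      (1, List("a", "b", "c")),
      (2, List("d", "e")),
      (3, List("a", "e", "f"))))

    // Step 1: one (key, value) pair per list element.
    val dataKV = dataRDD.flatMap { case (k, vs) => vs.map(v => (k, v)) }

    // Step 2: invert to (value, key), group by value, and emit every ordered
    // pair of distinct keys that share that value.
    val dataID = dataKV
      .map { case (k, v) => (v, k) }
      .groupByKey()
      .flatMap { case (_, ks) =>
        val keys = ks.toSeq.distinct
        for (a <- keys; b <- keys if a != b) yield (a, b)
      }
      .distinct()
      .groupByKey()
      .mapValues(_.toList.sorted)

    dataID.collect().foreach(println)
  }
}

One caveat: keys with no shared values simply don't appear in dataID; to emit them with empty lists, one would left-join against the original key set.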

by Remy Kabel at May 23, 2015 01:02 PM

DragonFly BSD Digest

In Other BSDs for 2015/05/23

A calmer week, probably because of the U.S. holiday.

by Justin Sherrill at May 23, 2015 12:57 PM

TheoryOverflow

Merge action of binomial heaps amortized time

The merge action of binomial heaps, I believe, has O(lg n) worst-case running time.

http://en.wikipedia.org/wiki/Binomial_heap#Merge

But I'm having some trouble applying amortized analysis. My professor mentioned it's O(1) in class, but I cannot seem to prove it satisfactorily on my own. Any thoughts much appreciated!

by xxanderxu at May 23, 2015 12:35 PM

CompsciOverflow

What is the correct maximum number of subnetworks?

ip: 10.5.2.1

subnetmask: 255.255.255.240

{10.5.2.1/28}

What is the correct maximum number of subnetworks?

What is the difference between a classful subnet and a CIDR subnet?
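
A hedged worked example (my own): 255.255.255.240 is a /28 prefix, leaving $32 - 28 = 4$ host bits, so each subnet holds $2^4 - 2 = 14$ usable hosts. Under classful rules, 10.5.2.1 sits in the class A network 10.0.0.0/8, so there are $2^{28-8} = 2^{20}$ possible /28 subnets of it; CIDR drops the class-based default mask and treats the prefix length alone as the boundary.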

by Dïnuth Perera at May 23, 2015 12:31 PM

Planet Emacsen

Emacs Redux: Mastering Emacs (the first Emacs book in over a decade) is out

Mickey Petersen just released Mastering Emacs, the first new book about our beloved editor since Learning GNU Emacs (released way back in 2004).

I haven’t had the time to read the book yet, but being familiar with Mickey’s work I have no doubt it’s outstanding. That’s all from me for now - go buy the book and start mastering Emacs.

P.S.

I hope we won’t have to wait another decade for the next great Emacs book.

by Bozhidar Batsov at May 23, 2015 12:19 PM

/r/emacs

Fred Wilson

Video Of The Week: The New York Public Library Bitcoin Discussion

Last week we had a discussion at the New York Public Library about Bitcoin. Andrew Ross Sorkin moderated a discussion between Nathaniel Popper, author of Digital Gold, Bitcoin Chief Scientist Gavin Andresen, and me. Here it is:

by Fred Wilson at May 23, 2015 11:48 AM

TheoryOverflow

What exactly are Moore machines?

Ok, don't be scared by the title - it is not that I don't know the concept of a Moore machine, or basic FSM concepts in general. However, I think that the term "Moore machine", despite being frequently used in some areas, is generally poorly defined. I would be grateful if someone could point me to some (authoritative) source.

Background

The first and most prominent notion of a Moore machine I've come across is that a Moore machine is a certain kind of deterministic finite-state transducer. That is, given an input alphabet $\Sigma$ and an output alphabet $\Omega$, a Moore machine $M$ computes a function $\lambda_M \colon \Sigma^* \to \Omega^*$ such that for each input symbol that is consumed, a single output symbol is produced (this restriction is not always obeyed and, moreover, is not relevant to the question, but I will stick with it for simplicity). Thus, among other things, we have that for all $w \in \Sigma^*$, $|\lambda_M(w)| = |w|$, and in particular $\lambda_M(\varepsilon) = \varepsilon$. A Moore machine $M$ is generally defined as a tuple $M = \langle Q, \Sigma, q_o, \delta, \gamma\rangle$, where $\gamma \colon Q \to \Omega$ is the state output function.

There exist two interpretations of this definition. In the original one presented by Moore, the state output function of the current state determines the output that is emitted when an input symbol is read. Note that in this case, for a given Moore machine $M$, the first symbol of $\lambda_M(w)$, $w \neq \varepsilon$, is necessarily fixed to $\gamma(q_0)$. Note that when relying on this interpretation, there are Mealy machines for which no equivalent Moore machines exist. On the positive side, this interpretation admits a (minimal) canonical form.

In the second interpretation, the successor state determines the output that is emitted when an input symbol is read (i.e., when reading $\sigma\in\Sigma$ in state $q\in Q$, $\gamma(\delta(q, \sigma))$ is emitted). Then, every Mealy machine can be converted into an equivalent Moore machine (with potential blow-up by a factor of $|\Omega|$) and vice versa. However, then Moore machines do not admit a canonical form, due to the degree of freedom introduced by the fact that $\lambda_M$ does not constrain the output of the initial state, i.e., the value for $\gamma(q_0)$. For a more concrete example, consider the transducer with $\Sigma = \{a, b\}$ and $\Omega = \{x, y\}$ that outputs $x$ when reading $a$ and $y$ when reading $b$. The canonical Mealy machine has a single state with two loops with labels $a / x$ and $b / y$. A Moore machine (according to the second interpretation) needs two states, one with output $x$ and one with output $y$. $a$ loops on the $x$-state and $b$ loops on the $y$-state, and the state gets switched on all other inputs. Both states can be chosen as the initial state, and there is no natural choice.

I've also come across a different notion of Moore machines, namely as a generalization of DFA (i.e., where a DFA is a Moore machine with output alphabet $\Omega = \mathbb{B} = \{0, 1\}$ indicating acceptance). In this case, the value $\lambda_M(w)$ of the output function $\lambda_M \colon \Sigma^* \to \Omega$ is a single symbol only, which is determined by $\gamma(\delta(q_0, w))$. While this admits a canonical form, evaluating $\lambda_M(w)$ in general yields significantly less information than in the above case, but also yields information that is not observable in the transducer interpretation (i.e., for $\lambda_M(\varepsilon)$). These aspects are relevant, e.g., in the field of state identification (finding the current state in a given Machine by experimentation).

Questions

This brings me to my main two questions:

  1. What are the justifications for using Moore machines as transducers in the first place? The slight simplicity over Mealy machines comes at a high cost, namely either a reduced expressive power (1st interpretation) or the lack of a canonical form (2nd interpretation), and in either case a potentially increased size compared to a Mealy machine. Am I unjust in claiming that for these reasons, the concept of a Moore machine should be avoided altogether in a strictly formal setting?
  2. Is there an established name for the above-mentioned "generalized DFA"? Clearly, the term Moore machine is too confusing, as it is commonly associated with transducers. I've come across TDFA for the ternary case (true, false, unknown/dontcare/maybe/...), but I am looking for a generalization beyond that. Maybe simply GDFA (generalized DFA)?

I'm looking forward to your input on this!

by misberner at May 23, 2015 11:32 AM

An algorithm for counting to Graham’s Number

I’m trying to come up with an algorithm that performs some action a Graham’s number of times on a machine with a reasonable amount of memory.

I thought of a way to organize a counter suitable for calculating $a{\uparrow\uparrow}b$, but got stuck at the even smaller problem of counting to $2^a$, where $a$ is a 64-bit integer (the $2^{21}$ TB needed to store such a number is above what I consider reasonable).

Is there some clever technique to count beyond $2{\uparrow\uparrow}6$? Or are there conceptual limitations on counters with polynomially-bounded memory?

by Oleg Stroganov at May 23, 2015 11:28 AM

UnixOverflow

What's the difference between the three FreeBSD versions?

What is the difference between the three FreeBSD versions (Current, Release and Stable)?

by zachron at May 23, 2015 11:06 AM

StackOverflow

Executing More than 3 Futures does not work

I'm using the Dispatch library in my sbt project. When I initialize three futures and run them, everything works perfectly, but when I add one more future it hangs in a loop.

My code:

// Initializing futures
def sequenceOfFutures() = {
  var pageNumber: Int = 1
  var list = Seq(Future {})

  for (pageNumber <- 1 to 4) {
    list ++= Seq(
      Future {
        str = getRequestFunction(pageNumber)
        GlobalObjects.sleep(Random.nextInt(1500))
      }
    )
  }
  Future.sequence(list)
}

Await.result(sequenceOfFutures(), Duration.Inf)

And then the getRequestFunction(pageNumber) code:

def getRequestFunction(pageNumber: Int) = {

  val h = Http("scala.org", as_str)

  while (h.isComplete) {
    Thread.sleep(1500)
  }
}

Based on a suggestion from How to configure a fine tuned thread pool for futures?, I added this to my code:

import java.util.concurrent.Executors
import scala.concurrent._

implicit val ec = new ExecutionContext {
    val threadPool = Executors.newFixedThreadPool(1000);

    def execute(runnable: Runnable) {
        threadPool.submit(runnable)
    }

    def reportFailure(t: Throwable) {}
}// Still didn't work 

So when I use more than three futures, the Await blocks forever. Could someone please suggest how to solve this issue?
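
A hedged sketch of a restructuring that avoids the shared var and mutable accumulation (getRequestFunction and its behavior are assumed from the question):

import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration

// a dedicated pool, so blocking calls inside the futures can't starve the default one
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(8))

def sequenceOfFutures() =
  Future.sequence((1 to 4).map { page =>
    Future { getRequestFunction(page) } // no shared mutable state between futures
  })

Await.result(sequenceOfFutures(), Duration.Inf)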

by Veerendra Kumar at May 23, 2015 10:46 AM

Creating Hash-Map Clojure

I have 2 lists, let's say a list of users (list-usr) and of indices (idx-usr), and I want to create a hash-map from these lists, much like:

(def list-usr [196 186 244])
(def idx-usr  [0 1 2])

How can I form the hash-map {196 0, 186 1, 244 2} from the 2 lists?

by ByanJati at May 23, 2015 10:42 AM

/r/compsci

Is it possible to implement lower level language features in a higher level language?

For example, I'm guessing it would be impossible to implement a C compiler in Ruby (at best you could get an emulator).

But would it be possible, say, to implement static typing in a language that is dynamically typed?

Or could you implement something like C in Rust (similar language levels, so I guess that doesn't count)?

Or what about writing an assembly compiler in C? I suppose that would be impossible (?).

I suppose what you could do is this: if you have a self-hosting Python interpreter (one written in Python), you could create bindings to the C language, and thus implement pointers and/or static typing in the language, in a way reintroducing those features after they had been removed.

Sorry if this may be too off topic for this subreddit.

submitted by tmtwd
[link] [13 comments]

May 23, 2015 10:35 AM

QuantOverflow

Black Scholes Formula, drift term

In the formula, the stock return is modelled as a Brownian motion, that is, a drift plus a stochastic term; OK, I get that. But the drift term is then modelled as r − volatility^2/2. I am not sure how they derive this "volatility^2/2". Is this derived from Itô's Lemma?
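
A hedged sketch of the standard derivation: apply Itô's Lemma to $f(S) = \ln S$ with $dS_t = S_t(r\,dt + \sigma\,dW_t)$:

$$d(\ln S_t) = \frac{1}{S_t}\,dS_t - \frac{1}{2}\,\frac{1}{S_t^2}\,(dS_t)^2 = \left(r - \frac{\sigma^2}{2}\right)dt + \sigma\,dW_t,$$

using $(dS_t)^2 = \sigma^2 S_t^2\,dt$; the $-\sigma^2/2$ is exactly the Itô correction coming from the second-order term.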

by Liam at May 23, 2015 10:21 AM

Lobsters

LogCabin: a distributed storage system built on Raft

LogCabin is a distributed storage system built on Raft that provides a small amount of highly replicated, consistent storage. It is a reliable place for other distributed systems to store their core metadata and is helpful in solving cluster management issues.

Comments

by jcspencer at May 23, 2015 10:20 AM

StackOverflow

Running tests in IntelliJ ClassNotFoundException

I tried many different run configs, but whatever I do I get this exception when running specs2 tests in IntelliJ for Scala.

It always fails to find a class that ends with a $ sign. I checked, and there really is no such class file. There's AppControllerIT.class and lots of classes like AppControllerIT$innerFunctionOrClass.class, but no AppControllerIT$.class.

Any ideas?

Thanks!

com.haha.market.api.e2e.controllers.AppControllerIT$

java.lang.ClassNotFoundException: com.haha.market.api.e2e.controllers.AppControllerIT$


STACKTRACE
  java.net.URLClassLoader.findClass(URLClassLoader.java:381)
  java.lang.ClassLoader.loadClass(ClassLoader.java:424)
  sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
  java.lang.ClassLoader.loadClass(ClassLoader.java:357)
  org.specs2.reflect.Classes$$anonfun$loadClassEither$1.apply(Classes.scala:140)
  org.specs2.reflect.Classes$$anonfun$loadClassEither$1.apply(Classes.scala:140)
  org.specs2.control.ActionT$$anonfun$safe$1.apply(ActionT.scala:89)
  org.specs2.control.ActionT$$anonfun$reader$1$$anonfun$apply$6.apply(ActionT.scala:80)
  org.specs2.control.Status$.safe(Status.scala:100)

by vrepsys at May 23, 2015 10:07 AM

Ansible create directories from a list

I want to create some directories from a list I have in my vars/main.yml.

- app_root:
    network_prod:   "/var/www/prod/network/app"
    database_prod:  "/var/www/prod/db/app"

My tasks/main.yml so far has this:

- name: Create application directory structure
  file: 
    path: "{{ item }}"
    state: directory
    mode: 755
  with_items:
    - app_root

but it doesn't work. I thought this could be achieved using with_dict, so I also tried:

- name: Create application directory structure
  file: 
    path: "{{ item.value }}"
    state: directory
    mode: 755
  with_dict:
    - app_root

but I got: fatal: [vagrant.dev] => with_dict expects a dict.

I've read all about looping-over-hashes, but this doesn't seem to work.

The reason I'm using this notation is because I use these variables elsewhere as well and I need to know how to call them.

by axil at May 23, 2015 09:39 AM

Bypassing deprecation warnings in Scala 2.11

There is a nice way to avoid deprecation warnings in Scala 2.10 (and before) by invoking the deprecated method from a deprecated local def. Unfortunately, it doesn't work in Scala 2.11. Is there an alternative?

by Alexey Romanov at May 23, 2015 09:32 AM

Play Framework - add a field to JSON object

I have a problem with adding a field to a JSON object in Play Framework using Scala:

I have a case class containing data. For example:

case class ClassA(a:Int,b:Int)

and I am able to create a Json object using Json Writes:

val classAObject = ClassA(1,2)
implicit val classAWrites= Json.writes[ClassA]
val jsonObject = Json.toJson(classAObject)

and the Json would look like:

{ a:1, b:2 }

Let's suppose I would like to add an additional 'c' field to the Json object. Result:

{ a:1, b:2, c:3 }

How do I do that without creating a new case class or creating my Json object myself using Json.obj? I am looking for something like:

jsonObject.merge({c:3}) 

Any help appreciated!
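
A hedged sketch (my own): Json.toJson returns a JsValue, but once narrowed to JsObject it supports + and ++, so a field can be appended without a new case class:

import play.api.libs.json._

val jsonObject = Json.toJson(ClassA(1, 2))
// Narrow to JsObject, then append a field (or merge a whole object with ++).
val withC: JsObject = jsonObject.as[JsObject] + ("c" -> JsNumber(3))
// equivalently: jsonObject.as[JsObject] ++ Json.obj("c" -> 3)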

by Paweł Kozikowski at May 23, 2015 09:29 AM

Preprocessing of sbt-twirl templates

I need to preprocess some Twirl templates in my sbt project. Is it possible to define a task in build.sbt that will run before the Twirl plugin compiles its templates?

by tilex at May 23, 2015 09:25 AM

Ansible include task only if file exists

I'm trying to include a file only if it exists. This allows for custom "tasks/roles" between existing "tasks/roles" if needed by the user of my role. I found this:

- include: ...
  when: condition

But the Ansible docs state that:

"All the tasks get evaluated, but the conditional is applied to each and every task" - http://docs.ansible.com/playbooks_conditionals.html#applying-when-to-roles-and-includes

So

- stat: path=/home/user/optional/file.yml
  register: optional_file
- include: /home/user/optional/file.yml
  when: optional_file.stat.exists

This will fail if the file being included doesn't exist. I guess there might be another mechanism for allowing a user to add tasks to an existing recipe. I can't just have the user add a role after mine, because then they wouldn't have control of the order: their role would always be executed after mine.

by Leito at May 23, 2015 09:22 AM

QuantOverflow

What is this conditional probability?

I have been doing some reading for a project, and I keep seeing conditional probabilities conditioned on an "$\mathcal{F}_{t_i}$", like this:

$$\mathbb{P} [C(t_{i+1})<y|\mathcal{F}_{t_i}]=\mathbb{P} [C(t_{i+1})<y|C(t_i)]$$

The problem is the $\mathcal{F}_{t_i}$: I want to know what it stands for. I know this is probably a stupid question, but I don't know where to look exactly.

by Naucle at May 23, 2015 09:10 AM

/r/types

CompsciOverflow

Unsupervised Learning: BCM or Oja's Rule

I am learning about unsupervised machine learning, and am a bit confused regarding different algorithms to update weights. So, I understand that both Oja's Rule and BCM can be used.

In Oja's rule:

dw/dt = k*x*y - w*y^2

Where x is the value at the input neuron, y is the value at the output neuron and w is the connection strength between the two. The idea is that this prevents weights from growing out of proportion.

In BCM:

dw/dt = k*(y-theta)*x

Where the idea is that unless my postsynaptic strength exceeds a threshold theta, I don't want my connection to be strengthened.

Studying competitive learning, which is yet another type of unsupervised learning, I came across another rule:

dw/dt = n*(x-y)

In this case, however, x is the full input vector and y is the vector representation of the output. The idea is that we move the prototype that responded most strongly to a given input closer to it, making the two more similar.

However, I don't understand when I should use which rule. For example, why couldn't I use a rule that combines both Oja's and BCM, hence only increasing connection weights when the output exceeds a given threshold, and preventing weights from growing out of proportion?

by max0005 at May 23, 2015 08:54 AM

QuantOverflow

Why Must Dividends Be Reinvested to Use Risk-Neutral Pricing?

Assume the price of a stock $S_t$ paying continuous dividend $a$ satisfies $$ dS_t = S_t\left((\mu - a)dt + \sigma dW_t\right). $$ The risk-neutral pricing formula states that if $\mathbb{Q}$ is any probability measure such that $e^{-rt}S_t$ is a $\mathbb{Q}$-martingale (MG), then the value of any self-financing replicating strategy $V_t$ that replicates a payoff $X = f(S_T)$ is $$ V_t = \mathbb{E}_{\mathbb{Q}}\left[e^{-r(T-t)}X|\mathcal{F}_t\right]. $$

So, discount $S_t$ and compute the differential $$ dZ_t := d(e^{-rt}S_t) = Z_t\left((\mu - a - r)dt + \sigma dW_t\right). $$ Define $\frac{d \mathbb{P}}{d \mathbb{Q}} = \exp\left(-\theta W_t -\frac{1}{2} \theta^2 t\right)$ and $\tilde{W}_t = W_t + \theta t$ for some $\theta$, which we determine by replacing $W_t$ with $\tilde{W}_t$ in $dZ_t$ and making it driftless: \begin{align*} dZ_t & = Z_t\left((\mu - a - r)dt + \sigma (d\tilde{W}_t - \theta dt)\right) \\ & = Z_t\left((\mu - a - r - \theta \sigma)dt + \sigma d\tilde{W}_t \right) \\ & = \sigma Z_td\tilde{W}_t, \end{align*} where we set $\theta = \frac{\mu - a - r}{\sigma}$. We have found a $\mathbb{Q}$ such that $Z_t$ is a $\mathbb{Q}$-MG, and so can price as usual.

However, Shreve (Stochastic Calculus for Finance II) on p. 235 instead derives the discounted portfolio process (with reinvested dividends), and finds the market price of risk $\theta$ for that process, showing that when this is done, the discounted stock process is indeed not a martingale. Why not do it the way I did?

by bcf at May 23, 2015 08:44 AM

StackOverflow

Predicting Probabilities in Logistic Regression Model in Apache Spark MLlib

I am working on Apache Spark to build an LRM using the LogisticRegressionWithLBFGS() class provided by MLlib. Once the model is built, we can use the predict function, which gives only the binary labels as output. I also want the probabilities to be calculated.

There is an implementation for the same found in

https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/classification/LogisticRegression.scala

override protected def predictPoint(
    dataMatrix: Vector,
    weightMatrix: Vector,
    intercept: Double) = {
  require(dataMatrix.size == numFeatures)

  // If dataMatrix and weightMatrix have the same dimension, it's binary logistic regression.
  if (numClasses == 2) {
    val margin = dot(weightMatrix, dataMatrix) + intercept
    val score = 1.0 / (1.0 + math.exp(-margin))
    threshold match {
      case Some(t) => if (score > t) 1.0 else 0.0
      case None => score
    }
  }

This method is not exposed, and the probabilities are not available either. How can I use this function to get probabilities? The dot method used in the above function is also not exposed; it is present in the BLAS package, but it is not public.
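
A hedged sketch of one commonly suggested route (binary case; the data names are assumptions): clearing the model's threshold makes predict return the raw score instead of a 0/1 label:

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

def probabilities(training: RDD[LabeledPoint], test: RDD[LabeledPoint]): RDD[Double] = {
  val model = new LogisticRegressionWithLBFGS().setNumClasses(2).run(training)
  model.clearThreshold() // predict() now returns the sigmoid score in [0, 1]
  test.map(p => model.predict(p.features))
}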

by Vijeth Hegde at May 23, 2015 08:44 AM

GAMS programming-Defining Subsets

I have three sets, I, J and K. I know that for defining a subset in GAMS I should write it this way: I2(I), when set I2 is a subset of set I.

The problem is that the third set, set K, is a subset of both set I and set J, and I don't know how to code that in GAMS.

Thanks in advance :)

P.S. Someone with enough reputation, please create a GAMS tag, because there isn't anything related to this subject in the list.

by Sepideh Ghajari at May 23, 2015 08:34 AM

Scala REPL startup error "class file is broken" [duplicate]

This question already has an answer here:

Every time after starting the Scala 2.9.2 REPL (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0-ea), executing the first line of code brings me an error:

scala> 1 + 2
error: error while loading CharSequence, class file '/usr/lib/jvm/java-8-oracle/jre/lib/rt.jar(java/lang/CharSequence.class)' is broken
(bad constant pool tag 15 at byte 1484)

Later, during further evaluation in the current REPL instance, no similar errors occur.

Does anyone have any suggestions on how to handle this behaviour?

by Yehor Nemov at May 23, 2015 07:57 AM

Java Swing automation testing/functional and non-functional

I need help regarding testing my Java Swing application. I have vast experience in mainframe technology, but I do not have experience in Java, so I am a beginner in Java technology and I am looking for a good way to automate a Java Swing application. Can someone suggest a way to automate Java Swing application testing for a beginner?

by Garima Verma at May 23, 2015 07:48 AM

CompsciOverflow

Find maximum scoring characters with non overlapping occurrences from a string

I have a problem related to Episode Mining in which I need to find the maximal utility of a episode given its occurrences in an event sequence.But I am presenting the question in a different form so that it's easier to explain.

There is a long string, S, where each of its character have some positive score. Given another string, T, find a match with S containing all occurrences of the sequence of characters of T such that :-

  1. The occurrences are non-overlapping.
  2. The sequence of characters in S must be the same as in T, but it may be discontinuous.
  3. Each occurrence should lie within a given window.

The total score of a match is found by simply adding the scores of the characters at each occurrence. The problem is to find the match with the maximum score among all possible matches.

Example - String S - a(2) b(3) e(1) d(10) d(7) c(1) a(5) d(8) b(5) d(6)

String T - a b d

Window size - 5

Two matches of string T are:

  1. [1,2,4], [7,9,10]. Score: [2+3+10] + [5+5+6] = 31
  2. [1,2,5], [7,9,10]. Score: [2+3+7] + [5+5+6] = 28

The score is maximum in match 1, so it is the required answer. We didn't consider the occurrences [1,2,8] or [1,2,10], as they are not within the given window, since (8 - 1) > 5.

So, I would like to know if there is an efficient way to find the set of occurrences, i.e. the match, that gives the maximum score.

by Sonam Rathore at May 23, 2015 07:12 AM

/r/compsci

Hardware/EE Background, Exploring different approaches to learning Comp Sci

Hi there, I graduated from college 2 years ago with a degree in EE and I've been doing hardware/network testing / systems admin jobs since then. I'm interested in teaching myself Comp Sci in hopes of converting to being a Software Engineer.

I'm at the early stages now, gathering information. One approach I learned from a friend is to learn one programming language pretty well and learn all the CS theory (e.g. data structures, algorithms, etc) using that language. Is C a good language to start with to build the CS theory from there? However, I heard that I'll need to know C++ or Java for object-oriented programming.

Where do I start? I'm open to new ideas and approaches to learning CS. I also don't want my hardware background to go to waste, maybe something in between would be good, like embedded systems. How do people make the jump from hardware to software? I've seen people with MS/PhDs in EE and they become software engineers in their first jobs out of school.

submitted by RCube123
[link] [1 comment]

May 23, 2015 06:57 AM

CompsciOverflow

Difference between a Turing machine and a finite state machine?

I am doing a presentation about Turing machines and I wanted to give some background on FSMs before introducing Turing machines. The problem is, I really don't know how exactly they differ from one another.

Here's what I know it's different:

An FSM moves through sequential states depending on the corresponding condition met, while a Turing machine operates on an infinite "tape" with a head that reads and writes.

There's more room for error in FSMs, since we can easily fall into a non-ending state, while that's less of an issue for Turing machines, since we can go back and change things.

But other than that, I don't know many more differences which make Turing machines better than FSMs.

Can you please help me?

by Julio Garcia at May 23, 2015 06:35 AM

TheoryOverflow

Counting the number of K4

I was going over this paper and I don't understand a certain proof (section five phase 2).

Given a graph G=(V,E) partitioned into the sets of vertices L and H: the vertices in L have degree at most D, where D is an arbitrary threshold, and the rest of the vertices are in H.

Let's take a closer look at Phase 2 (Theorem 5): for each vertex x in L, compute the square of the adjacency matrix of G[N(x)] to decide whether G[N(x)] contains a triangle.

We know that the fastest way to find a triangle in a graph is the fast matrix multiplication algorithm, which runs in $O(n^\alpha)$ time. So we can bound the running time by $\sum_{x \in L} d(x)^\alpha$. Since $|L|$ can be as large as $e$ (the number of edges) if we choose a bad D, and $d(x)$ is at most D, why can Phase 2 be done in $O(D^{\alpha-1}e)$? I was under the impression that it was $O(D^{\alpha}e)$, i.e., where is the $\alpha-1$ coming from? I'm pretty sure it's trivial, but I'm missing the point.
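
A hedged guess at the missing step: since $d(x) \le D$ for every $x \in L$, write $d(x)^\alpha = d(x)^{\alpha-1} \cdot d(x) \le D^{\alpha-1} d(x)$, so $\sum_{x \in L} d(x)^\alpha \le D^{\alpha-1} \sum_{x \in L} d(x) \le D^{\alpha-1} \cdot 2e$, which is $O(D^{\alpha-1}e)$.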

by BryanS at May 23, 2015 06:30 AM

CompsciOverflow

Finding the best combinations between items of 2 arrays in a sequential manner

I'm reposting this because people found the last description to be too hard to follow.

  1. The data unit I'm working with is a pair of 2 numbers. The numbers can be any integer that is bigger than 0. Example:
    [X, Y]
  2. The input is 2 arrays of these pairs, each array can have any length. Example:
    A = {[1, 2], [3, 2], [5, 1]}
    B = {[2, 3], [4, 5], [5, 3]}
  3. If I combine 2 pairs, one from each array, I get a new pair like this:
    [X1, Y1] + [X2, Y2] = [X1 + X2, Y1 + Y2]
  4. The output is a result of all combinations between the elements of the 2 input's where each element is the "best" and is ordered as ascending by Y

What "best" means in this case is that given [X, Y],

  1. this pair would have the highest or equal to highest X compared to all the other pairs that might have the same Y. Example: [2, 2], [2, 2], [1, 2]
  2. it cannot have an X equal to or lower than the maximum of any other pair that has a lower Y. Example: [2, 1], [4, 2], [2, 4]


To illustrate how the output can be reached

  1. Lets take this example input:
    A = {[1, 2], [3, 2], [5, 1]}
    B = {[2, 3], [4, 5], [5, 3]}
  2. First we combine every item in A with every item in B
    A[1] + B[1] = [3, 5], A[1] + B[2] = [5, 7], A[1] + B[3] = [6, 5]
    A[2] + B[1] = [5, 5], A[2] + B[2] = [7, 7], A[2] + B[3] = [8, 5]
    A[3] + B[1] = [7, 4], A[3] + B[2] = [9, 6], A[3] + B[3] = [10, 4]
  3. Next we order the combinations by Y
    [7, 4], [10, 4], [3, 5], [6, 5], [5, 5], [8, 5], [9, 6], [5, 7], [7, 7]
  4. Filter the combinations by the rules described above
    [10, 4]

So in this example, the output is an array with the length of 1 because all other pairs with Y of 4 have lower X and all other pairs with Y > 4 have equal or lower X.

I didn't include it in the example because its not necessary but the process can be optimized by prefiltering A and B by the same rules.

Now to my problem: I'm not dealing with arrays of length 3 but rather several hundred, and the input count isn't 2 but 5+. As you can imagine, this process can be nested like this:
result5 = process(process(process(process(A, B), C), D), E)

So in practice, the potential memory use is A.size * B.size * C.size * D.size * E.size..., which is a lot more than fits in the 20-something kB I have available.

What I'm looking for is an algorithm that will fetch me the same results, in the same order, one by one. The fetches will be sequential and will start from 0, so I think any algorithm that can produce all the results in the right order without sorting at the end can be modified for this. Does anybody know how this could be achieved?

by user29075 at May 23, 2015 06:11 AM

StackOverflow

Not all class methods listed in the scala online language reference?

I am looking at scala.io.Source. It has a bunch of

   fromXXXX()

methods. However, in the online reference I do not see them.

Where are those methods - and more generally a comprehensive list of all methods on scala.io.Source - located?

by javadba at May 23, 2015 06:06 AM

Why is my Scala class not visible to its matching test class?

I am starting out in Scala with SBT, making a Hello World program.

Here's my project layout:


I've made sure to download the very latest JDK and Scala, and configure my Project Settings. Here's my build.sbt:

name := "Coursera_Scala"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies += "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"

Hello.scala itself compiles okay:

package demo

class Hello {
  def sayHelloTo(name: String) = "Hello, $name!"
}

However, my accompanying HelloTest.scala does not. Here's the test:

package demo

import org.scalatest.FunSuite

class HelloTest extends FunSuite {

  test("testSayHello") {
    val result = new Hello.sayHelloTo("Scala")
    assert(result == "Hello, Scala!")
  }

}

Here's the error:

Error:(8, 22) not found: value Hello
    val result = new Hello.sayHello("Scala")
                 ^

In addition to the compile error, Intellij shows errors "Cannot resolve symbol" for the symbols Hello, assert, and ==. This leads me to believe that the build is set up incorrectly, but then wouldn't there be an error on the import?
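
A hedged sketch of two likely fixes (my own reading, not from the question): new Hello.sayHelloTo("Scala") parses as a constructor call on a type named Hello.sayHelloTo, so the constructor needs parentheses, and "Hello, $name!" needs the s interpolator for $name to be substituted:

class Hello {
  def sayHelloTo(name: String) = s"Hello, $name!" // s-interpolator substitutes $name
}

val result = (new Hello).sayHelloTo("Scala") // parenthesize the constructor call
assert(result == "Hello, Scala!")

The IDE's "Cannot resolve symbol" markers would then be a separate project-setup issue (e.g. the test sources or the scalatest dependency not being picked up).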

by AndrewK at May 23, 2015 06:04 AM

QuantOverflow

How to pull stock exchange names for a list of tickers, bloomberg?

How can I pull stock exchange names for a list of stocks with tickers on Bloomberg? Please advise on the steps, so I can paste in the list of tickers without having to type them one by one.

by Gitanjali Mathur109 at May 23, 2015 05:53 AM

StackOverflow

core.typed not reporting type error in repl

Here is part of an example taken from the core.typed GitHub repo:

(ns typedclj.rps-async
  (:require [clojure.core.typed :as t]
            [clojure.core.async :as a]
            [clojure.core.typed.async :as ta]))

(t/defalias Move
  "A legal move in rock-paper-scissors"
  (t/U ':rock ':paper ':scissors))

(t/defalias PlayerName
  "A player's name in rock-paper-scissors"
  t/Str)

(t/defalias PlayerMove
  "A move in rock-paper-scissors. A Tuple of player name and move"
  '[PlayerName Move])

(t/defalias RPSResult
  "The result of a rock-paper-scissors match.
  A 3 place vector of the two player moves, and the winner"
  '[PlayerMove PlayerMove PlayerName])

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Implementation
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(t/ann MOVES (t/Vec Move))
(def MOVES [:rock :paper :scissors])

(t/ann BEATS (t/Map Move Move))
(def BEATS {:rock :scissors, :paper :rock, :scissors :paper})
(def BEATS {:a :b})

Note that in the last line I redefined BEATS to {:a :b}, which conflicts with its type annotation, but when I eval this in the REPL, no error is thrown. I was expecting an error because the latest version of core.typed is said to be able to report type errors at runtime.

Here is the entire project.clj file:

            (defproject typedclj "0.1.0-SNAPSHOT"
                        :description "FIXME: write description"
                        :url "http://example.com/FIXME"
                        :license {:name "Eclipse Public License"
                                  :url  "http://www.eclipse.org/legal/epl-v10.html"}
                        :dependencies [[org.clojure/clojure "1.6.0"]
                                       [org.clojure/core.async "0.1.346.0-17112a-alpha" :exclusions [org.clojure/tools.analyzer.jvm]]
                                       [org.clojure/core.typed "0.2.92"]
                                       [clj-http "1.1.2"]
                                       [http-kit "2.1.18"]
                                       ]
                        :repl-options {:nrepl-middleware [clojure.core.typed.repl/wrap-clj-repl]}
                        :main ^:skip-aot typedclj.core
                        :target-path "target/%s"
                        :profiles {:uberjar {:aot :all}})

by qed at May 23, 2015 05:09 AM

Advogato

The Global Computer Employer Index

My aim in building The Global Computer Employer Index is to link directly to the "Jobs" or "Careers" section of tech company websites. Unlike the more-common job boards, I don't list individual jobs, rather it is up to each company to list them on their own sites. Job board posts are taken down when the position is filled, or when the term they are paid for expires. My listings are permanent.

Do not be dismayed if your company is not listed; just mail your homepage to mdcrawford@gmail.com. There is no charge for this service, nor will there ever be.

The page for Santa Cruz County explains how this started back in 1997. Last year I decided to cover the entire planet.

When I add new cities I post announcements on my What's New? page.

I take requests if you'd like me to post employers in any specific location.

May 23, 2015 05:06 AM

StackOverflow

Submitting spark applications to yarn programmatically

I feel that it is becoming a very common requirement to be able to submit Spark applications programmatically to YARN. However, there are no references about it in the Apache Spark documentation. Is it even possible, and if it is, is there a straightforward way to achieve it? Please advise.
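
A hedged sketch of one option, SparkLauncher (available in newer Spark releases; all paths and class names below are placeholders):

import org.apache.spark.launcher.SparkLauncher

val app = new SparkLauncher()
  .setSparkHome("/opt/spark")            // placeholder
  .setAppResource("/path/to/my-app.jar") // placeholder
  .setMainClass("com.example.MyApp")     // placeholder
  .setMaster("yarn-cluster")
  .launch()                              // returns a java.lang.Process
app.waitFor()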

by Amit at May 23, 2015 05:03 AM

Halfbakery

CompsciOverflow

Equivalence of DFA, NFA, NFA-λ

How could we use the transition function of the respective automaton to show that the following relationships between the classes of finite state automata hold: DFA ⊆ NFA ⊆ NFA-λ?
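
A hedged sketch of the standard idea: a DFA's transition function $\delta \colon Q \times \Sigma \to Q$ induces an NFA transition function $\delta'(q, a) = \{\delta(q, a)\}$ accepting the same language, and an NFA becomes an NFA-λ by additionally setting $\delta''(q, \lambda) = \emptyset$ for every state; each inclusion is thus witnessed by an automaton whose transition function simply embeds the previous one.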

by user3709122 at May 23, 2015 04:37 AM

StackOverflow

How to filter a mixed-node graph on neighbor vertex types

This question is about Spark GraphX. I want to compute a subgraph by removing nodes that are neighbors of certain other nodes.

Example

[Task] Retain A nodes and B nodes that are not neighbors of C2 nodes.

Input graph:

                    ┌────┐
              ┌─────│ A  │──────┐
              │     └────┘      │
              v                 v
┌────┐     ┌────┐            ┌────┐     ┌────┐
│ C1 │────>│ B  │            │ B  │<────│ C2 │
└────┘     └────┘            └────┘     └────┘
              ^                 ^
              │     ┌────┐      │
              └─────│ A  │──────┘
                    └────┘

Output graph:

         ┌────┐
   ┌─────│ A  │
   │     └────┘
   v           
┌────┐         
│ B  │         
└────┘         
   ^           
   │     ┌────┐
   └─────│ A  │
         └────┘

How to elegantly write a GraphX query that returns the output graph?
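
A hedged sketch (my own, assuming a Graph[String, Int] named graph whose vertex attribute is the node label "A", "B", "C1" or "C2"): first flag neighbors of C2 nodes with aggregateMessages, then filter with subgraph:

import org.apache.spark.graphx._

val nearC2: VertexRDD[Boolean] = graph.aggregateMessages[Boolean](
  ctx => {
    // mark both endpoints of any edge touching a C2 node
    if (ctx.srcAttr == "C2") ctx.sendToDst(true)
    if (ctx.dstAttr == "C2") ctx.sendToSrc(true)
  },
  _ || _
)

val output: Graph[String, Int] = graph
  .outerJoinVertices(nearC2) { (_, label, flag) => (label, flag.getOrElse(false)) }
  .subgraph(vpred = (_, attr) => attr._1 == "A" || (attr._1 == "B" && !attr._2))
  .mapVertices((_, attr) => attr._1)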

by Pimin Konstantin Kefaloukos at May 23, 2015 04:22 AM

/r/emacs

If I define a variable with (defvar-local), why is it not available when searching for it while using add-dir-local-variable?

I'm defining a variable like this:

(defvar-local shackra:var-python-ver 2 "Indicates which version of Python we are using, Python2 or Python3")

What I'm doing is using that variable later to change my Python configuration. The idea is to control this on a per-project basis by adding it to a .dir-locals.el file. However, when I use add-dir-local-variable I cannot find the variable that I previously defined. I can use it in the *scratch* buffer.

What am I doing wrong?

submitted by shackra
[link] [comment]

May 23, 2015 04:09 AM

Planet Clojure

Ambly Using JavaScriptCore C API

The Ambly 0.4.0 release involves a revision to use the low-level JavaScriptCore C API. Ambly was previously using the new Objective-C API wrapper that Apple had introduced with iOS 7.

This change broadens the usability of Ambly, making it easier to compose with projects that don't use the Objective-C API (React Native, for example). But more importantly: It fundamentally makes it possible to use Ambly in projects that don't have JavaScriptCore built with JSC_OBJC_API_ENABLED. One example is Ejecta; it is now possible to use Ambly to live-code Ejecta using ClojureScript.




Making this change essentially involved rewriting portions of the Objective-C side of the Ambly REPL to use C, using JSGlobalContextRef and related low-level API calls instead of the higher-level JSContext affordances. This really amounted to re-implementing working code to simply use analogous constructs, but all of this is internal and transparent to users.

The only visible Ambly API changes are that JSGlobalContextRef must be passed in in places where JSContext was previously. Apple makes this nearly trivial with a couple of bridging methods:

+[JSContext contextWithJSGlobalContextRef:]
-[JSContext JSGlobalContextRef]

Additionally, one of the JSContext-specific ABYContextManager APIs has been deprecated as it is no longer relevant.

From my perspective, the biggest change for this release is that a lot of the code became a little more… let's say, cumbersome, owing to the verbosity of the C style and the need to do manual memory management. But in my opinion, this is definitely worth it.

Also, there is a possibility that Facebook may use JavaScriptCore as the JavaScript engine for React Native on Android. If that's the case, it may turn out that coming to grips with the JavaScriptCore C API may pay off later when updating Ambly to work with Android!

by Mike Fikes at May 23, 2015 04:00 AM

Lobsters

StackOverflow

Problems converting JSON objects in Scala

I'm trying to make a simple example of class serialization in Scala using the json4s library, but even after extensively searching the internet, unfortunately I couldn't find any satisfactory sample that would solve my problem.

Basically I have a simple class called Person and I want to extract an instance of this class from a JSON string.

case class Person(
    val name: String,
    val age: Int,
    val children: Option[List[Person]]
)

So when I do this:

val jsonStr = "{\"name\":\"Socrates\", \"age\": 70}"
println(Serialization.read[Person](jsonStr))

I get this output:

"Person(Socrates,70,None)" // works fine!

But when I have no age parameter in JSON string, I get this error:

Exception in thread "main" org.json4s.package$MappingException: No usable value for age

I know that the Person class has two required parameters in its constructor, but I would like to know if there's a way to make this conversion through a parser or something similar.

Also, I've tried to write such a parser, but with no success.

Thanks in advance for any help.
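
A hedged sketch (my own): making the optional field an Option lets json4s map a missing "age" to None instead of throwing the MappingException:

import org.json4s._
import org.json4s.jackson.Serialization

implicit val formats: Formats = DefaultFormats

case class Person(name: String, age: Option[Int], children: Option[List[Person]])

Serialization.read[Person]("""{"name":"Socrates"}""")
// => Person(Socrates,None,None)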

by Alexandre at May 23, 2015 03:43 AM

CompsciOverflow

Testing whether an analytic function vanishes identically

I have an application that basically reduces to testing whether a given function vanishes identically. The function is given symbolically, using unary and binary operators on complex numbers. For example, we might want to test the function $(z+1)^2-z^2-2z-1$. (It could be a function of more than one variable.)

The problem is known to be undecidable. However, there are reasonable heuristic approaches. The one I've been using involves numerical sampling. I just pick random complex values for $z$, and evaluate the function using machine-precision arithmetic. This works pretty well in most cases, and is efficient. If the function is known to be analytic, then the method would succeed with unit probability given infinite-precision arithmetic.

One could use a computer algebra system for this, but a CAS is computationally expensive and often will not reach any definite conclusion.

Although the numerical sampling heuristic generally works pretty well, it's not hard to come up with examples where it fails. For example, consider the function $z^{100}$. For almost any $|z|<1$, I get an underflow, and for almost any $|z|>1$, I get an overflow. Either way, the result is inconclusive. (An overflow doesn't automatically tell me the function is nonzero, because it could happen at some intermediate step in the computation.)

Can this heuristic be improved on? Using multiple-precision arithmetic rather than machine precision doesn't seem to be a big win. It's much more computationally expensive, and even if I take 100 digits of precision, it's still pretty easy to construct examples where it fails.

It seems like it might make sense to try some kind of adaptive algorithm in bad cases to search for regions of the complex plane that neither underflow nor overflow. Or more generally we could maintain error bounds, and search for regions where the error bounds do not make the result inconclusive.

Symbolic differentiation is cheap and doesn't fail, so I could also test whether the function's derivatives are zero. But this doesn't necessarily help for an example like $z^{100}$, unless I happen to get lucky and try the 100th derivative.

by Ben Crowell at May 23, 2015 03:38 AM

/r/freebsd

/r/emacs

QuantOverflow

In Dupire's paper, why is $(S_t, t)$ in the $(K, T)$ space?

I'm new to local volatility model.

In Dupire's paper and most textbooks, the local volatility $\sigma(K, T)$ is derived in the $(K, T)$ (i.e., strike and maturity) space, from call prices or the implied volatility surface.

However, by definition, local volatility is a function in terms of $(S_t, t)$, i.e., instantaneous underlying price and time.

How do I relate these two?

by tcquant at May 23, 2015 02:51 AM

UnixOverflow

How can I change the default gateway?

Currently I'm running FreeBSD 9.1, and the default gateway is already configured in rc.conf.

rc.conf:

defaultrouter = "10.0.0.1"

But now I want to change the default gateway without rebooting the system. Is this possible?

by WWW at May 23, 2015 02:31 AM

Planet Clojure

Clojure in Action - Book Review

I just finished reading this book a couple of minutes ago...so here's my small review...


The book is kinda big...with 434 pages...and it's both my first book on and my first approach to Clojure.

I have to say...I'm not a big fan of Java...I don't even actually like it...but Clojure...it's something else -:) With its Lisp-like syntax and its functional orientation...it's a delightful programming language...

The book itself is a great introduction that gets us ready for something else...but...as Clojure is still a young language...some of the examples don't work, mainly because some keywords or libraries have become obsolete...gladly, most of the examples work just out of the box...

There are also many examples that help to fully understand what Clojure is all about...





If you haven't heard about closures, recursion, higher-order functions or currying...then this book is for you...

While I'm planning to read more Clojure books, I can say that this book is the perfect way to get started...

Greetings,

Blag.
Development Culture.

by Alvaro "Blag" Tejada Galindo (noreply@blogger.com) at May 23, 2015 02:00 AM

Halfbakery

Planet Theory

NSA in P/poly: The Power of Precomputation

Even after the Snowden revelations, there remained at least one big mystery about what the NSA was doing and how.  The NSA’s classified 2013 budget request mentioned, as a priority item, “groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.”  There was a requested increase, of several hundred million dollars, for “cryptanalytic IT services” and “cryptanalysis and exploitation services program C” (whatever that was).  And a classified presentation slide showed encrypted data being passed to a high-performance computing system called “TURMOIL,” and decrypts coming out.  But whatever was going on inside TURMOIL seemed to be secret even within NSA; someone at Snowden’s level wouldn’t have had access to the details.

So, what was (or is) inside the NSA’s cryptanalytic black box?  A quantum computer?  Maybe even one that they bought from D-Wave?  (Rimshot.)  A fast classical factoring algorithm?  A proof of P=NP?  Commentators on the Internet rushed to suggest each of these far-reaching possibilities.  Some of us tried to pour cold water on these speculations—pointing out that one could envision many scenarios that were a little more prosaic, a little more tied to the details of how public-key crypto is actually used in the real world.  Were we just naïve?

This week, a new bombshell 14-author paper (see also the website) advances an exceedingly plausible hypothesis about what may have been the NSA’s greatest cryptanalytic secret of recent years.  One of the authors is J. Alex Halderman of the University of Michigan, my best friend since junior high school, who I’ve blogged about before.  Because of that, I had some advance knowledge of this scoop, and found myself having to do what regular Shtetl-Optimized readers will know is the single hardest thing in the world for me: bite my tongue and not say anything.  Until now, that is.

Besides Alex, the other authors are David Adrian, Karthikeyan Bhargavan, Zakir Durumeric, Pierrick Gaudry, Matthew Green, Nadia Heninger, Drew Springall, Emmanuel Thomé, Luke Valenta, Benjamin VanderSloot, Eric Wustrow, Santiago Zanella-Béguelin, and Paul Zimmermann (two of these, Green and Heninger, have previously turned up on Shtetl-Optimized).

These authors study vulnerabilities in Diffie-Hellman key exchange, the “original” (but still widely-used) public-key cryptosystem, the one that predates even RSA.  Diffie-Hellman is the thing where Alice and Bob first agree on a huge prime number p and a number g, then Alice picks a secret a and sends Bob g^a (mod p), and Bob picks a secret b and sends Alice g^b (mod p), and then Alice and Bob can both compute (g^a)^b = (g^b)^a = g^(ab) (mod p), but an eavesdropper who’s listening in only knows p, g, g^a (mod p), and g^b (mod p), and one can plausibly conjecture that it’s hard from those things alone to get g^(ab) (mod p).  So then Alice and Bob share a secret unknown to the eavesdropper, which they didn’t before, and they can use that secret to start doing cryptography.
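
To make the algebra concrete, here is a toy sketch of the exchange (my own, illustrative parameters only; real deployments use vetted, much larger safe primes and generators):

import java.math.BigInteger
import java.security.SecureRandom

val rnd = new SecureRandom()
val p = BigInteger.probablePrime(512, rnd) // public prime (toy size)
val g = BigInteger.valueOf(2)              // public base (toy choice, not necessarily a generator)
val a = new BigInteger(256, rnd)           // Alice's secret
val b = new BigInteger(256, rnd)           // Bob's secret
val A = g.modPow(a, p)                     // Alice sends g^a mod p
val B = g.modPow(b, p)                     // Bob sends g^b mod p
assert(B.modPow(a, p) == A.modPow(b, p))   // both arrive at the shared g^(ab) mod p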

As far as anyone knows today, the best way to break Diffie-Hellman is simply by calculating discrete logarithms: that is, solving the problem of recovering a given only g and h=g^a (mod p).  At least on a classical computer, the fastest known algorithm for discrete logarithms (over fields of prime order) is the number field sieve (NFS).  Under plausible conjectures about the distribution of “smooth” numbers, NFS uses time that grows like exp((1.923+o(1))(log p)^(1/3)(log log p)^(2/3)), where the exp and logs are base e (and yes, even the lower-order stuff like (log log p)^(2/3) makes a big difference in practice).  Of course, once you know the running time of the best-known algorithm, you can then try to choose a key size (that is, a value of log(p)) that’s out of reach for that algorithm on the computing hardware of today.

(Note that the recent breakthrough of Antoine Joux, solving discrete log in quasipolynomial time in fields of small characteristic, also relied heavily on sieving ideas.  But there are no improvements from this yet for the “original” discrete log problem, over prime fields.)

But there’s one crucial further fact, which has been understood for at least a decade by theoretical cryptographers, but somehow was slow to filter out to the people who deploy practical cryptosystems.  The further fact is that in NFS, you can arrange things so that almost all the discrete-logging effort depends only on the prime number p, and not at all on the specific numbers g and h for which you’re trying to take the discrete log.  After this initial “precomputation” step, you then have a massive database that you can use to speed up the “descent” step: the step of solving g^a = h (mod p), for any (g,h) pair that you want.

It’s a little like the complexity class P/poly, where a single, hard-to-compute “advice string” unlocks exponentially many inputs once you have it.  (Or a bit more precisely, one could say that NFS reveals that exponentiation modulo a prime number is sort of a trapdoor one-way function, except that the trapdoor information is subexponential-size, and given the trapdoor, inverting the function is still subexponential-time, but a milder subexponential than before.)

The kicker is that, in practice, a large percentage of all clients and servers that use Diffie-Hellman key exchange use the same few prime numbers p.  This means that, if you wanted to decrypt a large fraction of all the traffic encrypted with Diffie-Hellman, you wouldn’t need to do NFS over and over: you could just do it for a few p‘s and cache the results.  This fact can singlehandedly change the outlook for breaking Diffie-Hellman.

The story is different depending on the key size, log(p).  In the 1990s, the US government insisted on “export-grade” cryptography for products sold overseas (what a quaint concept!), which meant that the key size was restricted to 512 bits.  For 512-bit keys, Adrian et al. were able to implement NFS and use it to do the precomputation step in about 7 days on a cluster with a few thousand cores.  After this initial precomputation step (which produced 2.5GB of data), doing the descent, to find the discrete log for a specific (g,h) pair, took only about 90 seconds on a 24-core machine.

OK, but no one still uses 512-bit keys, do they?  The first part of Adrian et al.’s paper demonstrates that, because of implementation issues, even today you can force many servers to “downgrade” to the 512-bit, export-grade keys—and then, having done so, you can stall for time for 90 seconds as you figure out the session key, and then do a man-in-the-middle attack and take over and impersonate the server.  It’s an impressive example of the sort of game computer security researchers have been playing for a long time—but it’s really just a warmup to the main act.

As you’d expect, many servers today are configured more intelligently, and will only agree to 1024-bit keys.  But even there, Adrian et al. found that a large fraction of servers rely on just a single 1024-bit prime (!), and many of the ones that don’t rely on just a few other primes.  Adrian et al. estimate that, for a single 1024-bit prime, doing the NFS precomputation would take about 45 million years using a single core—or to put it more ominously, 1 year using 45 million cores.  If you built special-purpose hardware, that could go down by almost two orders of magnitude, putting the monetary cost at a few hundred million dollars, completely within the reach of a sufficiently determined nation-state.  Once the precomputation was done, and the terabytes of output stored in a data center somewhere, computing a particular discrete log would then take about 30 days using 1 core, or mere minutes using a supercomputer.  Once again, none of this is assuming any algorithmic advances beyond what’s publicly known.  (Of course, it’s possible that the NSA also has some algorithmic advances; even modest ones could obviate the need for special-purpose hardware.)

While writing this post, I did my own back-of-the-envelope, and got that using NFS, calculating a 1024-bit discrete log should be about 7.5 million times harder than calculating a 512-bit discrete log.  So, extrapolating from the 7 days it took Adrian et al. to do it for 512 bits, this suggests that it might’ve taken them about 143,840 years to calculate 1024-bit discrete logs with the few thousand cores they had, or 1 year if they had 143,840 times as many cores (since almost all this stuff is extremely parallelizable).  Adrian et al. mention optimizations that they expect would improve this by a factor of 3, giving us about 100 million core-years, very similar to Adrian et al.’s estimate of 45 million core-years (the lower-order terms in the running time of NFS might account for some of the remaining discrepancy).
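To spell out that envelope (a sketch using the standard heuristic NFS running time, ignoring the $o(1)$ term, so treat it as a ballpark): NFS runs in time $L_p[1/3, c] = \exp\left((c + o(1))(\ln p)^{1/3}(\ln \ln p)^{2/3}\right)$ with $c = (64/9)^{1/3} \approx 1.923$.  For $p \approx 2^{512}$ the exponent is about $1.923 \cdot (354.9)^{1/3} (5.87)^{2/3} \approx 44.3$, while for $p \approx 2^{1024}$ it's about $1.923 \cdot (709.8)^{1/3} (6.57)^{2/3} \approx 60.1$.  The ratio is then $e^{60.1 - 44.3} = e^{15.8} \approx 7 \times 10^6$, which (up to rounding) is where the 7.5 million figure comes from.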

Adrian et al. mount a detailed argument in their paper that all of the details about NSA’s “groundbreaking cryptanalytic capabilities” that we learned from the Snowden documents match what would be true if the NSA were doing something like the above.  The way Alex put it to me is that, sure, the NSA might not have been doing this, but if not, then he would like to understand why not—for it would’ve been completely feasible within the cryptanalytic budget they had, and the NSA would’ve known that, and it would’ve been a very good codebreaking value for the money.

Now that we know about this weakness of Diffie-Hellman key exchange, what can be done?

The most obvious solution—but a good one!—is just to use longer keys.  For decades, when applied cryptographers would announce some attack like this, theorists like me would say with exasperation: “dude, why don’t you fix all these problems in one stroke by just, like, increasing the key sizes by a factor of 10?  when it’s an exponential against a polynomial, we all know the exponential will win eventually, so why not just go out to where it does?”  The applied cryptographers explain to us, with equal exasperation in their voices, that there are all sorts of reasons why not, from efficiency to (maybe the biggest thing) backwards-compatibility.  You can’t unilaterally demand 2048-bit keys, if millions of your customers are using browsers that only understand 1024-bit keys.  On the other hand, given the new revelations, it looks like there really will be a big push to migrate to larger key sizes, as the theorists would’ve suggested from their ivory towers.

A second, equally-obvious solution is to stop relying so much on the same few prime numbers in Diffie-Hellman key exchange.  (Note that the reason RSA isn’t vulnerable to this particular attack is that it inherently requires a different composite number N for each public key.)  In practice, generating a new huge random prime number tends to be expensive—taking, say, a few minutes—which is why people so often rely on “standard” primes.  At the least, we could use libraries of millions of “safe” primes, from which a prime for a given session is chosen randomly.
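For instance, here's a minimal sketch of what per-deployment parameter generation could look like (Scala, using the JDK's primality testing through BigInt; the bit size and certainty parameter are illustrative, not a vetted recommendation):

import java.security.SecureRandom
import scala.util.Random

// Find a "safe" prime p = 2q + 1 with q prime, so that the multiplicative
// group mod p has a large prime-order subgroup for Diffie-Hellman.
def safePrime(bits: Int, rnd: Random): BigInt =
  Iterator
    .continually(BigInt.probablePrime(bits - 1, rnd))
    .map(q => q * 2 + 1)
    .find(_.isProbablePrime(64))
    .get

val p = safePrime(2048, new Random(new SecureRandom()))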

A third solution is to migrate to elliptic-curve cryptography (ECC), which as far as anyone knows today, is much less vulnerable to descent attacks than the original Diffie-Hellman scheme.  Alas, there’s been a lot of understandable distrust of ECC after the DUAL_EC_DRBG scandal, in which it came out that the NSA backdoored some of NIST’s elliptic-curve-based pseudorandom generators by choosing particular elliptic curves that it knew how to handle.  But maybe the right lesson to draw is that mod-p groups and elliptic-curve groups both seem to be pretty good for cryptography, but the mod-p groups are less good if everyone is using the same few prime numbers p (and those primes are “within nation-state range”), and the elliptic-curve groups are less good if everyone is using the same few elliptic curves.  (A lot of these things do seem pretty predictable with hindsight, but how many did you predict?)

Many people will use this paper to ask political questions, like: hasn’t the NSA’s codebreaking mission once again usurped its mission to ensure the nation’s information security?  Doesn’t the 512-bit vulnerability that many Diffie-Hellman implementations still face, as a holdover from the 1990s export rules, illustrate why encryption should never be deliberately weakened for purposes of “national security”?  How can we get over the issue of backwards-compatibility, and get everyone using strong crypto?  People absolutely should be asking such questions.

But for readers of this blog, there’s one question that probably looms even larger than those of freedom versus security, openness versus secrecy, etc.: namely, the question of theory versus practice.  Which “side” should be said to have “won” this round?  Some will say: those useless theoretical cryptographers, they didn’t even know how their coveted Diffie-Hellman system could be broken in the real world!  The theoretical cryptographers might reply: of course we knew about the ability to do precomputation with NFS!  This wasn’t some NSA secret; it’s something we discussed openly for years.  And if someone told us how Diffie-Hellman was actually being used (with much of the world relying on the same few primes), we could’ve immediately spotted the potential for such an attack.  To which others might reply: then why didn’t you spot it?

Perhaps the right lesson to draw is how silly such debates really are.  In the end, piecing this story together took a team that was willing to do everything from learning some fairly difficult number theory to coding up simulations to poring over the Snowden documents for clues about the NSA’s budget.  Clear thought doesn’t respect the boundaries between disciplines, or between theory and practice.

(Thanks very much to Nadia Heninger for reading this post and correcting a few errors in it.  For more about this, see Bruce Schneier’s post or Matt Green’s post.)

by Scott at May 23, 2015 01:41 AM

/r/compsci

I'm thinking about going to grad school but my GPA isn't perfect.

I know I want to major in Computer Science. Is there any hope for me getting in anywhere?

submitted by throwaway_comp_sci
[link] [15 comments]

May 23, 2015 01:32 AM

/r/freebsd

Why are my VMs with UFS giving ZFS warnings on the login screen?

I installed 2 FreeBSD 10.1 VMs in vSphere 6. Both gave me the following notice on the first reboot after install. They have identical resource allocations for compute, memory, and storage.

Trying to mount root from ufs:/dev/da0p2 [rw]... ZFS NOTICE: Prefetch is disabled by default if less than 4GB of RAM is present; to enable, add "vfs.zfs.prefetch_disable=0" to /boot/loader.conf. ZFS filesystem version: 5 ZFS storage pool version: features support (5000) 

Those lines were taken from the output of dmesg, but this group of messages appeared at the login prompt (I was using the vSphere console). So the input of typing my username showed up immediately after the (5000) at the end of the last line, and the word "login:" appeared before the ZFS NOTICE: portion.

The strange thing is that these lines are the only occurrences of 'ZFS' in all of the dmesg log, so I'm not sure what is trying to invoke ZFS or its tools. These are VMs with iSCSI SAN block devices; I used the UFS filesystem and you can see the first line mounting ufs:/dev/da0p2 [rw]....

The second strange thing is that one of them continues to do this on every reboot. The other VM stopped after the first reboot and hasn't done it since.

What gives?

submitted by bbbryson
[link] [4 comments]

May 23, 2015 01:23 AM

/r/emacs

Filetype-specific indent

Hello,

I recently moved from Vim to Emacs (with evil!). So far, I am enjoying it: Emacs has access to some really powerful packages.

That being said, I haven't been able to emulate filetype-specific indentation in emacs.

Example

try:
    function()<cursor>

We just finished writing the body of a try statement in Python. We then insert a newline and write "except". Emacs has the following behaviour when doing <Ret>except

try:
    function()
    except<cursor>

Bad indentation: we should have done <Ret><Backspace>except, i.e. clear one indentation level before writing except.

In Vim, writing <Ret>except

try:
    function()
except<cursor>

Essentially, vim automatically adjusts the indentation of the except statement as soon as the user finishes writing it.

This behaviour is triggered by the option filetype indent plugin on. Every time a user writes a closing statement, vim adjusts the indentation to the matching opening statement, jumping multiple indentations if necessary.

QUESTION: Is there an Emacs equivalent to this? If not, are there language-specific packages that exist and can do that?

TL;DR: What is the emacs equivalent of filetype indent plugin on, if it exists?

submitted by morphheus
[link] [5 comments]

May 23, 2015 01:01 AM

StackOverflow

Specifying utf-8 in the content type in Scalatra when using json4s to serialize response

I have a route defined as follows in Scalatra:

class SomeRoute extends ScalatraServlet with JacksonJsonSupport {
    protected implicit val jsonFormats: Formats = DefaultFormats.withBigDecimal ++ org.json4s.ext.JodaTimeSerializers.all

    before() {
        contentType = formats("json")
    }

    get("/") {
        Ok(MyCaseClass("Hello world"))
    }

....

In the response, the content type is specified as "application/json". How do I set it to "application/json; charset=utf-8" instead? In the before(), the formats("json") is required to serialize the response to JSON. If I try to pass in my own headers in the Ok and set the content type there, it gets overwritten. If I try to append "charset=utf-8" to formats("json"), it fails to serialize.
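Not a definitive answer, but one thing worth trying (a sketch, untested): leave contentType as the bare mime type so JacksonJsonSupport still serializes, and set the charset on the underlying servlet response, which the plain Servlet API exposes separately:

before() {
    contentType = formats("json")
    // Standard Servlet API call; the charset should be appended to the
    // Content-Type header without changing the mime type Scalatra matches on.
    response.setCharacterEncoding("UTF-8")
}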

by Jay Hu at May 23, 2015 12:59 AM

Equivalent of Rust's try! macro in Scala

Rust's try! macro unwraps Results. Ok values are unwrapped; Err causes the enclosing method to return immediately with Err. Implementation here: https://doc.rust-lang.org/std/macro.try!.html

This is roughly equivalent to Scala Option.getOrElse(return None).

Is it possible to write the equivalent macro in Scala for Options? It seems the macro would need to check that the enclosing method's return type is Option. I found some relevant discussion here: https://groups.google.com/forum/#!topic/scala-user/BH0xz74f4Zk If so, how?

This would be really nice. It could perhaps be generalized to unwrap other types such as Future and Try. This is weaker than but similar to what the effectful project accomplishes: https://github.com/pelotom/effectful. In fact, I suppose you could achieve this using effectful but effectful requires the entire block to be enclosed in the macro, while try! uses return to allow a more local syntax, which is (arguably) nicer.
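For comparison, the macro-free idiom in Scala is to let a for-comprehension thread the Options; it covers many try!-style call sites, though without the early return. parseHost and parsePort here are hypothetical helpers, purely for illustration:

// Hypothetical parsers, standing in for any Option-returning steps.
def parseHost(s: String): Option[String] =
  if (s.nonEmpty) Some(s) else None
def parsePort(s: String): Option[Int] =
  scala.util.Try(s.toInt).toOption

// Each <- unwraps an Option; the first None short-circuits the whole
// expression, much as try! returns early from the enclosing function.
def parseAddr(host: String, port: String): Option[(String, Int)] =
  for {
    h <- parseHost(host)
    p <- parsePort(port)
  } yield (h, p)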

by tksfz at May 23, 2015 12:49 AM

Any tips for re-writing big projects in a new programming language? [on hold]

So in school, I wrote a ray tracer and a bunch of complementary functions in Racket (Lisp dialect, for context) as a class assignment. I'm learning scala/java right now, and I had the idea of re-doing the project in java as a learning exercise. Any tips on getting started? Like, would I be better off looking at the functions and thinking about how they'd work in java syntax, or should I really try and re-do the whole thing from scratch?

Fairly new developer ( < 3 years coding) so I'm still learning about structuring the process. Thanks!

by RC Clayton at May 23, 2015 12:37 AM

How to tie a string replace to a command in Light Table

My work wants us to use left and right double quotes while typing documentation. I want to use the LaTeX style ones because I write papers in LaTeX often and I already type them automatically.

I am new to clojure but did manage to find this:

(def mystring "``quoted string''")
(clojure.string/replace mystring #"``|''" {"``" "“" "''" "”"})

This will output:

“quoted string”

So I want to tie this functionality to a command using keybindings. I was going to ask how to tie the above command. But then I read this bit on how standard clojure libraries don't integrate so well with LightTable: How to integrate libraries (clojars) into Lightable plugins

I keep reading about regexs. Is there a way to apply a regex across an entire file?

What I'm thinking is I will type up the document and then at some point, hit (ctrl-i) or whatever and have it automatically replace the LaTeX characters with my work's desired characters.

If it was possible to have something auto-replace them while I type, that would be amazing. But I'm new to this so going with baby steps.

by Darrell at May 23, 2015 12:33 AM

/r/compsci

continuations in language design

I am designing a dialect of lisp as an exercise, and I've come upon a question about continuations.

One-shot continuations appear simple to implement using the structures I'm considering (heap-allocated activation records with static and dynamic parents), and can be used as the basis for many control structures like generators, coroutines, exceptions, etc.

On the other hand, scheme-style multi-shot continuations appear to be very non-trivial to implement, requiring call-stack copying, or copy-on-write activation records, or other such expensive operations.

I'd like to avoid complicating my implementation unnecessarily, so I am wondering what kinds of things can be done with multi-shot continuations that cannot be done with one-shot continuations. Is there anything important or useful there, or are the uses of explicitly multi-shot continuations all esoteric?

I know that multi-shot continuations can be used to implement backtracking. How much simpler do they make it?

submitted by deepcleansingguffaw
[link] [2 comments]

May 23, 2015 12:30 AM

/r/clojure

Android Studio plugin?

Is there an Android Studio plugin for clojure?

submitted by gilded_honour
[link] [13 comments]

May 23, 2015 12:29 AM

StackOverflow

Rare (intermittent) java.nio.charset.MalformedInputException in ScalaCheck

I'm getting a very rare, but repeatable, MalformedInputException in my ScalaCheck code.

I haven't been able to pin it down perfectly, or get a solid reproduction except "occasionally," but here's the code that I believe is generating the problem:

// Generate varying Unicode characters:
val unicodeCharacter = Gen.choose(Char.MinValue, Char.MaxValue).filter(Character.isDefined)

// Generate varying Unicode strings across all legal characters::
def unicodeGenerator(generator: Gen[Char] = unicodeCharacter, minimum: Int = 5, maximum: Int = 20): Gen[String] = Gen.chooseNum(minimum, maximum).flatMap { n =>
    Gen.sequence[String, Char](List.fill(n)(generator))
}

// The unit test that I think is occasionally blowing up:
"random strings longer than 20 characters" ! prop { (s: String) => { s.length > 20 must beTrue } }.setGen(unicodeGenerator(unicodeCharacter, 21, 30))

And here's the exception that I've seen:

Exception in thread "Thread-391" java.nio.charset.MalformedInputException: Input length = 1
    at java.nio.charset.CoderResult.throwException(CoderResult.java:281)
    at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:285)
    at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
    at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135)
    at java.io.Writer.write(Writer.java:157)
    at scala.xml.XML$.write(XML.scala:108)
    at scala.xml.XML$$anonfun$save$2.apply$mcV$sp(XML.scala:91)
    at scala.xml.XML$$anonfun$save$2.apply(XML.scala:91)
    at scala.xml.XML$$anonfun$save$2.apply(XML.scala:91)
    at scala.util.control.Exception$Catch.apply(Exception.scala:102)
    at scala.xml.XML$.save(XML.scala:90)
    at sbt.JUnitXmlTestsListener.writeSuite(JUnitXmlTestsListener.scala:170)
    at sbt.JUnitXmlTestsListener.endGroup(JUnitXmlTestsListener.scala:159)
    at sbt.React$$anonfun$react$8.apply(ForkTests.scala:133)
    at sbt.React$$anonfun$react$8.apply(ForkTests.scala:133)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at sbt.React.react(ForkTests.scala:133)
    at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:74)
    at java.lang.Thread.run(Thread.java:745)
Internal error when running tests: sbt.ForkMain$Run$RunAborted: java.net.SocketException: Broken pipe

Anyone have any ideas what is causing it and more important how to reliably prevent it?
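One hedged guess, based on the stack trace: the failure is in sbt's JUnit XML listener encoding the generated string, and lone surrogate code units would produce exactly this encoder error. Character.isDefined does not exclude surrogates, so filtering them explicitly may help (a sketch):

import org.scalacheck.Gen

// Exclude surrogate code units: a Char-level generator can emit half of a
// surrogate pair, which cannot be encoded as UTF-8 and breaks the XML writer.
val unicodeCharacter: Gen[Char] =
  Gen.choose(Char.MinValue, Char.MaxValue)
     .filter(c => Character.isDefined(c) && !Character.isSurrogate(c))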

by Zac at May 23, 2015 12:16 AM

/r/compilers

Advice for type inference?

Hello, /r/compilers. I'm working on my first compiler with type inference and I have absolutely no clue how I'm going to implement it other than bruteforcing the types from the AST. Is there any clean way of implementing this? (I don't mean just at the declaration site, I'd like to be able to infer for function return types, argument types, etc based on context... Something similar to ML, I guess).
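Not an answer so much as a pointer: the ML-style approach the poster mentions is Hindley-Milner inference, i.e., walk the AST inventing fresh type variables and equality constraints, then solve the constraints by unification. A minimal sketch of the unifier, the part that does the real work (types and names made up for illustration, in Scala):

// A made-up toy type language, not from any particular compiler.
sealed trait Type
case class TVar(name: String)         extends Type
case class TCon(name: String)         extends Type   // e.g. Int, Bool
case class TFun(from: Type, to: Type) extends Type   // a -> b

type Subst = Map[String, Type]

def subst(s: Subst, t: Type): Type = t match {
  case TVar(n)    => s.getOrElse(n, t)
  case TFun(a, b) => TFun(subst(s, a), subst(s, b))
  case c: TCon    => c
}

def occurs(n: String, t: Type): Boolean = t match {
  case TVar(m)    => n == m
  case TFun(a, b) => occurs(n, a) || occurs(n, b)
  case _          => false
}

def unify(t1: Type, t2: Type): Subst = (t1, t2) match {
  case (a, b) if a == b              => Map.empty
  case (TVar(n), t) if !occurs(n, t) => Map(n -> t)   // occurs check
  case (t, TVar(n)) if !occurs(n, t) => Map(n -> t)
  case (TFun(a1, b1), TFun(a2, b2))  =>
    val s1 = unify(a1, a2)
    val s2 = unify(subst(s1, b1), subst(s1, b2))
    s2 ++ s1.map { case (k, v) => k -> subst(s2, v) } // compose substitutions
  case _ => sys.error(s"cannot unify $t1 with $t2")
}

Function return types and argument types then fall out for free: give each unknown a fresh TVar, unify the constraints each application and definition induces, and read the answers off the final substitution (let-polymorphism adds a generalization step on top).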

submitted by FalconGames109
[link] [10 comments]

May 23, 2015 12:01 AM

Fefe

An employee of the US embassy in London allegedly spent months ...

An employee of the US embassy in London allegedly ran sextortion scams from his workplace computer for months. He wrote to women, claimed he had nude photos of them, and demanded money. And he did that from his workplace computer. For months.

May 23, 2015 12:01 AM


QuantOverflow

What's the point of discounting in risk-neutral pricing?

Let $\phi$ be a self-financing strategy that replicates a time $T$ option payoff $X$ on stock $S$. By definition of a trading strategy, $\phi$ is previsible. Finally, let $V_t$ be the time $t$ value of the portfolio implementing $\phi$.

Usual theorem: If $S_te^{-rt}$ is a $\mathbb{Q}$-martingale then $V_t = \mathbb{E}_\mathbb{Q}[e^{-r(T-t)}X|\mathcal{F}_t]$.

Note if the price of the option were anything other than $V_t$, we would have arbitrage.

My Claim: If $S_t$ is a $\mathbb{Q}$-martingale then $V_t = \mathbb{E}_\mathbb{Q}[X|\mathcal{F}_t]$.

No discounting necessary, and again this $\phi$ is a replicating self-financing strategy, so the value of the option must again be $V_t$ for all $t$.

Proof of My Claim. Consider discrete time and let $S_t$ be a $\mathbb{Q}$-martingale. By definition of a self-financing strategy, $\Delta V_{t+1} = \phi_{t+1}\Delta S_{t+1}$. where $\Delta V_{t+1} = V_{t+1} - V_t$. Hence \begin{align*} \mathbb{E}_\mathbb{Q}[\Delta V_{t+1}|\mathcal{F}_t] =\mathbb{E}_\mathbb{Q}[\phi_{t+1}\Delta S_{t+1}|\mathcal{F}_t] =\phi_{t+1}\mathbb{E}_\mathbb{Q}[\Delta S_{t+1}|\mathcal{F}_t] = 0, \end{align*} where the second equality is because $\phi$ is previsible. So $V_t$ is a $\mathbb{Q}$-martingale, and \begin{align*} V_t = \mathbb{E}_\mathbb{Q}[V_T | \mathcal{F}_t] = \mathbb{E}_\mathbb{Q}[X | \mathcal{F}_t], \end{align*} where the second equality is because $\phi$ replicates $X$.


What am I missing? Why do we only consider discounted stock prices, and hence what's the point of discounting the expectation? The only good reason I can think of for considering only discounted stock prices is that martingale representation theorem guarantees the existence of self-financing strategies in this case. But still, the usual theorem is valid for any self-financing strategy, so the discounting still seems unnecessary.

by bcf at May 22, 2015 11:44 PM

TheoryOverflow

Sparser Bipartite graphs?

Maximal planar bipartite graphs are sparser than maximal planar graphs. For which other classes of graphs are maximal bipartite members sparser than arbitrary maximal members?

Let $\mathcal{C}$ be a class of graphs and let $\mathcal{B}$ be the class of bipartite graphs. We say that $\mathcal{C}$ is biparted if $\rho(\mathcal{C}) > \rho(\mathcal{B}\cap\mathcal{C})$ (where $\rho(\mathcal{C})$ is defined below) i.e. the asymptotic density of arbitrary graphs in the class is larger than the asymptotic density of the bipartite graphs in the same class.

Examples of biparted graphs are:

  • Planar Graphs ($\rho(\mathcal{C}) = 6 > 4 = \rho(\mathcal{C}\cap \mathcal{B})$)
  • Graphs of finite genus (same parameters as above)
  • Subclasses of planar graphs e.g. outerplanar graphs

My question is:

Are there any other natural classes of biparted graphs?

For a graph $G$, let $\rho(G) = \frac{2|E(G)|}{|V(G)|}$ be the average degree of a vertex. We say that a class of graphs $\mathcal{C}$ is maximally $\rho$-sparse, written $\rho = \rho(\mathcal{C})$, if the maximum average density (the maximum taken over all graphs from $\mathcal{C}$ with sufficiently many vertices) approaches $\rho$.

Note: Planar graphs can be proved to be biparted because Euler's formula holds. The same is the case with bounded-genus graphs. What about bounded tree-width graphs, graphs excluding a finite set of fixed minors, ...?

by SamiD at May 22, 2015 11:28 PM

StackOverflow

Conditionally apply a function in Scala - How to write this function?

I need to conditionally apply a function f1 to the elements in a collection depending on the result of a function f2 that takes each element as an argument and returns a boolean. If f2(e) is true, f1(e) will be applied; otherwise e will be returned as is. My intent is to write a general-purpose function able to work on any kind of collection.

c: C[E] // My collection 
f1 = ( E => E )  // transformation function
f2 = ( E => Boolean )  // conditional function

I cannot come to a solution. Here's my idea, but I'm afraid I'm in high-waters

/* Notice this code doesn't compile ~ partially pseudo-code */
def conditionallyApply[E, C[_]](c: C[E], f2: E => Boolean, f1: E => E): C[E] = {
  @scala.annotation.tailrec
  def loop(a: C[E], c: C[E]): C[E] = {
    c match {
      case Nil => a // Here head / tail just express the idea, but I want to use a generic collection
      case head :: tail => loop(a ++ (if (f2(head)) f1(head) else head), tail)
    }
  }
  loop(??, c)  // how to get an empty collection of the same type as the one from the input?
}

Could any of you enlighten me?
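A sketch of one way out, assuming the standard collections library is acceptable: map already visits every element and rebuilds the same collection type via CanBuildFrom, so no manual recursion or empty-collection trick is needed:

import scala.collection.TraversableLike
import scala.collection.generic.CanBuildFrom

def conditionallyApply[E, C[X] <: TraversableLike[X, C[X]]]
    (c: C[E], f2: E => Boolean, f1: E => E)
    (implicit bf: CanBuildFrom[C[E], E, C[E]]): C[E] =
  c.map(e => if (f2(e)) f1(e) else e)

// conditionallyApply[Int, List](List(1, 2, 3, 4), _ % 2 == 0, _ * 10)
// res: List[Int] = List(1, 20, 3, 40)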

by Max at May 22, 2015 11:21 PM

Created unicode & unicode without whitespace generators in ScalaCheck

During testing we want to qualify unicode characters, sometimes with wide ranges and sometimes more narrow. I've created a few specific generators:

// Generate varying Unicode characters, from standard latin up to higher-order latin:
val latinUnicodeCharacter = Gen.choose('\u0041', '\u01B5').filter(Character.isDefined)

// Generate latin Unicode strings with all legal characters (21-40 characters):
val latinUnicodeGenerator: Gen[String] = Gen.chooseNum(21, 40).flatMap { n =>
    Gen.sequence[String, Char](List.fill(n)(latinUnicodeCharacter))
}

// Generate latin unicode strings without whitespace (21-40 characters): !! COMES UP SHORT...
val latinUnicodeGeneratorNoWhitespace: Gen[String] = Gen.chooseNum(21, 40).flatMap { n =>
    Gen.sequence[String, Char](List.fill(n)(latinUnicodeCharacter)).map(_.replaceAll("[\\p{Z}\\p{C}]", ""))
}

The latinUnicodeCharacter generator picks from characters ranging from standard latin ("A," "B," etc.) up to higher-order latin characters (Germanic/Nordic and others). This is good for testing latin-based character input for, say, names.

The latinUnicodeGenerator creates strings of 21-40 characters in length. These strings include horizontal space (not just a space character but other "horizontal space").

The final example, latinUnicodeGeneratorNoWhitespace, is used for say email addresses. We want the latin characters but we don't want spaces, control codes, and the like. The problem: Because I'm mapping the final result String and filtering out the control characters, the String shrinks and I end up with a total length that is less than 21 characters (sometimes).

So the question is: How can I implement latinUnicodeGeneratorNoWhitespace but do it inside the generator in such a way that I always get 21-40 character strings?
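One sketch that keeps the length invariant: filter at the character level instead of post-processing the string, so each of the n draws is already acceptable. (The two Character predicates only approximate the \p{Z}\p{C} classes, so adjust as needed.)

import org.scalacheck.Gen

// Reject separators and control characters before the string is assembled:
val latinNoSpaceCharacter: Gen[Char] =
  latinUnicodeCharacter.filter(c =>
    !Character.isWhitespace(c) && !Character.isISOControl(c))

val latinUnicodeGeneratorNoWhitespace: Gen[String] =
  Gen.chooseNum(21, 40).flatMap { n =>
    Gen.sequence[String, Char](List.fill(n)(latinNoSpaceCharacter))
  }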

by Zac at May 22, 2015 11:20 PM

CompsciOverflow

Better algorithm for determining if a vertex is on any cycle in a graph

The problem I'm facing is the following:

Given a simple undirected graph $G=(V,E)$ and a vertex $u \in V$, answer if $u$ is part of any cycle of $G$.

The algorithm I can think of is to remove an edge from $u$ to one of its neighbours $v$ at a time and ask if there is still a path from $u$ to $v$ in the resulting graph, placing $(u,v)$ back before the next iteration.

I estimate the time complexity as $O(\Delta\cdot(m+n))$, with $\Delta$ being the greatest degree of the graph and $(m+n)$ for using a BFS on each iteration.

Is there a better algorithm than this one? Does a simple DFS solve this problem and I'm just losing time with this?


BFS: Breadth-First Search
DFS: Depth-First Search
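For what it's worth, a single DFS does suffice: an edge lies on a cycle exactly when it is not a bridge, so $u$ is on a cycle iff some edge incident to $u$ is a non-bridge, and all bridges can be found in $O(n+m)$ with the classic low-link DFS. A sketch (recursive, so very deep graphs would need an explicit stack):

// u is on a cycle iff some incident edge is a non-bridge.
def onCycle(n: Int, edges: IndexedSeq[(Int, Int)], u: Int): Boolean = {
  val adj = Array.fill(n)(List.empty[(Int, Int)]) // (neighbor, edge id)
  for (((a, b), id) <- edges.zipWithIndex) {
    adj(a) = (b, id) :: adj(a)
    adj(b) = (a, id) :: adj(b)
  }
  val disc = Array.fill(n)(-1)                // DFS discovery times
  val low  = Array.fill(n)(0)                 // low-link values
  val isBridge = Array.fill(edges.length)(false)
  var time = 0

  def dfs(v: Int, parentEdge: Int): Unit = {
    disc(v) = time; low(v) = time; time += 1
    for ((w, id) <- adj(v) if id != parentEdge) {
      if (disc(w) == -1) {
        dfs(w, id)
        low(v) = math.min(low(v), low(w))
        if (low(w) > disc(v)) isBridge(id) = true // no back edge past (v, w)
      } else low(v) = math.min(low(v), disc(w))
    }
  }
  for (v <- 0 until n if disc(v) == -1) dfs(v, -1)
  adj(u).exists { case (_, id) => !isBridge(id) }
}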

by araruna at May 22, 2015 11:15 PM

StackOverflow

How can I reduce the number of test cases ScalaCheck generates?

I'm trying to solve two ScalaCheck (+ specs2) problems:

  1. Is there any way to change the number of cases that ScalaCheck generates?

  2. How can I generate strings that contain some Unicode characters?

For example, I'd like to generate about 10 random strings that include both alphanumeric and Unicode characters. This code, however, always generates 100 random strings, and they are strictly alpha character based:

"make a random string" in {
    def stringGenerator = Gen.alphaStr.suchThat(_.length < 40)
    implicit def randomString: Arbitrary[String] = Arbitrary(stringGenerator)

    "the string" ! prop { (s: String) => (s.length > 20 && s.length < 40) ==> { println(s); success; } }.setArbitrary(randomString)
}

Edit

I just realized there's another problem:

  1. Frequently, ScalaCheck gives up without generating 100 test cases

Granted I don't want 100, but apparently my code is trying to generate an overly complex set of rules. The last time it ran, I saw "gave up after 47 tests."
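On the count: if memory serves, the specs2/ScalaCheck bridge accepts test parameters via set, so something along these lines should cap the number of cases, and mixing generators handles the unicode part (a sketch; double-check the parameter name against your specs2 version):

import org.scalacheck.Gen

// Mostly alphanumerics, with the occasional arbitrary defined character:
val mixedChar: Gen[Char] = Gen.frequency(
  3 -> Gen.alphaNumChar,
  1 -> Gen.choose(Char.MinValue, Char.MaxValue).filter(Character.isDefined)
)

val mixedString: Gen[String] = Gen.chooseNum(21, 39).flatMap { n =>
  Gen.sequence[String, Char](List.fill(n)(mixedChar))
}

"the string" ! prop { (s: String) =>
  (s.length > 20 && s.length < 40) must beTrue
}.setGen(mixedString).set(minTestsOk = 10)

Generating the length and characters directly, rather than discarding alphaStr samples with suchThat, should also get rid of most of the "gave up after 47 tests" discards.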

by Zac at May 22, 2015 11:10 PM

Fefe

As you may know, I implemented a janky SMB server in gatling ...

As you may know, I implemented a janky SMB server in gatling that speaks just enough SMB to allow downloading files. For quite a while now, though, Windows has been trying WebDAV over HTTP first for UNC paths (\\server\share\datei.zip). So I sat down and implemented a minimal WebDAV server. Just enough that Windows is willing to talk to me.

Then I wanted to copy a 33 GB file with it (we're talking about the year 2015 here, you want to be able to copy big files!). What happened? As it turns out, the WebDAV client in Windows has an implicit 50 MB file size limit. No, really! 50 MB!

As usual with Windows, you can raise the limit in the Registry. So I go to the Registry, and what do I see there? It's a DWORD. A 32-bit value. The largest number that fits in a DWORD is? Right! 4 GB!

In other words: transferring a 33 GB file over WebDAV just doesn't work with Windows.

Well, you think, it'll work with SMB. And when Windows sees that the file is too big for WebDAV, surely it then tries SMB, right?

No, it doesn't. Not only that: it has then remembered that this server speaks WebDAV, and doesn't try SMB at all anymore until the next reboot or so.

It's truly unbelievable. What the hell!?

May 22, 2015 11:01 PM

StackOverflow

Why does a for comprehension expand to a `withFilter`

I'm working on a DSL for relational (SQL-like) operators. I have a Rep[Table] type with an .apply: ((Symbol, ...)) => Obj method that returns an object Obj which defines .flatMap: T1 => T2 and .map: T1 => T3 functions. As the type Rep[Table] does not know anything about the underlying table's schema, the apply method acts like a projection - projecting only the fields specified in the argument tuple (a lot like the untyped scalding api). Now type T1 is a "tuple-like", its length constrained to the projection tuple's length by some shapeless magic, but otherwise the types of the tuple elements are decided by the api user, so code like

val a = loadTable(...)
val res = a(('x, 'y)).map { (t: Row2[Int, Int]) =>
  (1, t(0))
}

or

val res = a(('x, 'y)).map { (t: Row2[String, String]) =>
  (1, t(0))
}

works fine. Note that the type of the argument for the map/flatMap function must be specified explicitly. However, when I try to use it with a for comprehension

val a = loadTable(...)
val b = loadTable(...)
val c = loadTable(...)

val res = for {
  as: Row2[Int, Int] <- a(('ax, 'ay))
  bs: Row2[Int, Int] <- b(('bx, 'by))
  cs: Row2[Int, Int] <- c(('cx, 'cy))
} yield (as(0), as(1), bs(0), bs(1), cs(0), cs(1))

it complains about the lack of a withFilter operator. Adding a .withFilter: T1 => Boolean does not cut it - it then complains about "missing parameter type for expanded function", as T1 is parameterised by some type. Only adding .withFilter: Row[Int, Int] => Boolean makes it work, but that is obviously not what I want at all.

My questions are: why does the withFilter get called in the first place and how can I use it with my parameterised tuple-like type T1?
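On the first question: a typed pattern on the left of <- is not a plain binding; the compiler inserts a type test via withFilter before mapping. Roughly (a runnable miniature on List, not exact compiler output):

val xs: List[Any] = List(1, "two", 3)

// for { i: Int <- xs } yield i + 1   expands to roughly:
val ys = xs
  .withFilter { case _: Int => true; case _ => false } // compiler-inserted test
  .map { case i: Int => i + 1 }
// ys == List(2, 4)

So the comprehension needs a withFilter whose predicate argument type the compiler can pin down, which is exactly where the parameterised T1 gets in the way.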


Edit In the end I went with a .withFilter: NothingLike => BoolLike, which is a no-op for simple checks like _.isInstanceOf[T1], and a more restricted .filter: T1 => BoolLike to be used in the general case.

by Pyetras at May 22, 2015 10:56 PM

Get object of case class from regex match

I'm working on scraping data from a webpage with Scala regexes, but I've run into a problem parsing the results into objects of some case classes.

In the following snippet I managed to scrape all the data, but I have no clue how to parse 3 elements from an iterator. I thought about something like:

val a :: b :: c :: _ = result.group(0).iDontKnowWha

Any ideas what I can do?

import model.FuneralSchedule
import play.api.libs.json.Json
import scala.io.Source

var date = "2015-05-05"
val source = Source.fromURL("http://zck.krakow.pl/?pageId=16&date=" + date).mkString
val regex = "(?s)<table>.+?(Cmentarz.+?)<.+?</table>".r
var thing: List[FuneralSchedule] = List()
var jsonFeed: List[Funeral] = List()
val regMatcher = "("

case class Funeral(hour: String, who: String, age: String) {
  override def toString: String = {
    "Cos"
  }
}

//implicit val format = Json.format[Funeral]
val out = regex.findAllIn(source).matchData foreach { table =>
  thing ::= FuneralSchedule(table.group(1), clearStrings(table.group(0)))
  """<tr\s?>.+?</\s?tr>""".r.findAllIn(clearStrings(table.group(0))).matchData foreach { tr =>
    //TODO: Fix this, it wrecks performance
    val temp = """<td\s?>.+?</\s?td>""".r.findAllIn(tr.group(0)).matchData.foreach {
      elem => println(elem)
    }
    //println(Json.toJson(thingy))
  }
  println("Koniec tabeli")
}
thing
//Json.toJson(jsonFeed)
println(removeMarkers("<td > <td> Marian Debil </ td>"))
def removeMarkers(s: String) = {
  s.replaceAll( """(</?\s?td\s?>)""", "")
}
def clearStrings(s: String) = {
  val regex = "((class=\".+?\")|(id=\".+?\")|(style=\".+?\")|(\\n))"
  s.replaceAll(regex, "")
}
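As for pulling the 3 elements out: one sketch is to collect the cell matches into a List and pattern-match it, reusing the Funeral class defined above:

val td = """<td\s?>(.+?)</\s?td>""".r

def rowToFuneral(tr: String): Option[Funeral] =
  td.findAllMatchIn(tr).map(_.group(1).trim).toList match {
    case hour :: who :: age :: _ => Some(Funeral(hour, who, age))
    case _                       => None // fewer than 3 cells in this row
  }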

by Haito at May 22, 2015 10:49 PM

StackOverflow

what's a good persistent collections framework for use in java?

By persistent collections I mean collections like those in clojure.

For example, I have a list with the elements (a,b,c). With a normal list, if I add d, my original list will have (a,b,c,d) as its elements. With a persistent list, when I call list.add(d), I get back a new list, holding (a,b,c,d). However, the implementation attempts to share elements between the list wherever possible, so it's much more memory efficient than simply returning a copy of the original list. It also has the advantage of being immutable (if I hold a reference to the original list, then it will always return the original 3 elements).

This is all explained much better elsewhere (e.g. http://en.wikipedia.org/wiki/Persistent_data_structure).

Anyway, my question is... what's the best library for providing this functionality for use in java? Can I use the clojure collections somehow (other that by directly using clojure)?

by bm212 at May 22, 2015 10:35 PM

Matthew Green

Attack of the week: Logjam

In case you haven't heard, there's a new SSL/TLS vulnerability making the rounds. Nicknamed Logjam, the new attack is 'special' in that it may admit complete decryption or hijacking of any TLS connection you make to an improperly configured web or mail server. Worse, there's at least circumstantial evidence that similar (and more powerful) attacks might already be in the toolkit of some state-level attackers such as the NSA.

This work is the result of an unusual collaboration between a fantastic group of co-authors spread all around the world, including institutions such as the University of Michigan, INRIA Paris-Rocquencourt, INRIA Paris-Nancy, Microsoft Research, Johns Hopkins and the University Of Pennsylvania. It's rare to see this level of collaboration between groups with so many different areas of expertise, and I hope to see a lot more like it. (Disclosure: I am one of the authors, but others did all the good bits.)

The absolute best way to understand the Logjam result is to read the technical research paper. This post is mainly aimed at people who want a slightly less technical form. For those with even shorter attention spans, here's the TL;DR:
It appears that the Diffie-Hellman protocol, as currently deployed in SSL/TLS, may be vulnerable to a serious downgrade attack that restores it to 1990s "export" levels of security, and offers a practical "break" of the TLS protocol against poorly configured servers. Even worse, extrapolation of the attack requirements -- combined with evidence from the Snowden documents -- provides some reason to speculate that a similar attack could be leveraged against protocols (including TLS, IPSec/IKE and SSH) using 768- and 1024-bit Diffie-Hellman. 
I'm going to tackle this post in the usual 'fun' question-and-answer format I save for this sort of thing.
What is Diffie-Hellman and why should I care about TLS "export" ciphersuites?
Diffie-Hellman is probably the most famous public key cryptosystem ever invented. Publicly discovered by Whit Diffie and Martin Hellman in the late 1970s (and a few years earlier, in secret, by UK GCHQ), it allows two parties to negotiate a shared encryption key over a public connection.

Diffie-Hellman is used extensively in protocols such as SSL/TLS and IPSec, which rely on it to establish the symmetric keys that are used to transport data. To do this, both parties must agree on a set of parameters to use for the key exchange. In traditional ('mod p') Diffie-Hellman, these parameters consist of a large prime number p, as well as a 'generator' g. The two parties now exchange keys as shown below:

Classical Diffie-Hellman (source).
TLS supports several variants of Diffie-Hellman. The one we're interested in for this work is the 'ephemeral' non-elliptic ("DHE") protocol variant, which works in a manner that's nearly identical to the diagram above. The server takes the role of Alice, selecting (p, g, g^a mod p) and signing this tuple (and some nonces) using its long-term signing key. The client responds with g^b mod p and the two sides then calculate a shared secret.
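For concreteness, the whole exchange fits in a few lines (a toy sketch with Scala's BigInt; the 512-bit prime is generated on the fly purely for illustration, real deployments use fixed, much larger parameters):

import java.security.SecureRandom
import scala.util.Random

val rnd = new Random(new SecureRandom())
val p   = BigInt.probablePrime(512, rnd) // public prime modulus (toy size)
val g   = BigInt(2)                      // public generator, illustrative

val a = BigInt(256, rnd)                 // Alice's secret exponent
val b = BigInt(256, rnd)                 // Bob's secret exponent

val A = g.modPow(a, p)                   // Alice -> Bob: g^a mod p
val B = g.modPow(b, p)                   // Bob -> Alice: g^b mod p

// Both sides derive the same shared secret g^(ab) mod p:
assert(B.modPow(a, p) == A.modPow(b, p))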

Just for fun, TLS also supports an obsolete 'export' variant of Diffie-Hellman. These export ciphersuites are a relic from the 1990s when it was illegal to ship strong encryption out of the country. What you need to know about "export DHE" is simple: it works identically to standard DHE, but limits the size of p to 512 bits. Oh yes, and it's still out there today. Because the Internet.
How do you attack Diffie-Hellman?
The best known attack against a correct Diffie-Hellman implementation involves capturing the value g^a mod p and solving to find the secret key a. The problem of finding this value is known as the discrete logarithm problem, and it's thought to be mathematically intractable, at least when Diffie-Hellman is implemented in cryptographically strong groups (e.g., when p is of size 2048 bits or more).

Unfortunately, the story changes dramatically when p is relatively small -- for example, 512 bits in length. Given a value g^a mod p for a 512-bit p, it should at least be possible to efficiently recover the secret a and read traffic on the connection.
Most TLS servers don't use 512-bit primes, so who cares?
The good news here is that weak Diffie-Hellman parameters are almost never used purposely on the Internet. Only a trivial fraction of the SSL/TLS servers out there today will organically negotiate 512-bit Diffie-Hellman. For the most part these are crappy embedded devices such as routers and video-conferencing gateways.

However, there is a second class of servers that are capable of supporting 512-bit Diffie-Hellman when clients request it, using a special mode called the 'export DHE' ciphersuite. Disgustingly, these servers amount to about 8% of the Alexa top million sites (and a whopping 29% of SMTP/STARTLS mail servers). Thankfully, most decent clients (AKA popular browsers) won't willingly negotiate 'export-DHE', so this would also seem to be a dead end.

It isn't. 

ServerKeyExchange message (RFC 5246)
You see, before SSL/TLS peers can start engaging in all this fancy cryptography, they first need to decide which ciphers they're going to use. This is done through a negotiation process in which the client proposes some options (e.g., RSA, DHE, DHE-EXPORT), and the server picks one.

This all sounds simple enough. However, one of the early, well known flaws in SSL/TLS is the protocol's failure to properly authenticate these 'negotiation' messages. In very early versions of SSL they were not authenticated at all. SSLv3 and TLS tacked on an authentication process -- but one that takes place only at the end of the handshake.* 

This is particularly unfortunate given that TLS servers often have the ability to authenticate their messages using digital signatures, but don't really take advantage of this. For example, when two parties negotiate Diffie-Hellman, the parameters sent by the server are transmitted within a signed message called the ServerKeyExchange (shown at right). The signed portion of this message covers the parameters, but neglects to include any information about which ciphersuite the server thinks it's negotiating. If you remember that the only difference between DHE and DHE-EXPORT is the size of the parameters the server sends down, you might start to see the problem.

Here it is in a nutshell: if the server supports DHE-EXPORT, the attacker can 'edit' the negotiation messages sent from the client -- even if the client doesn't support export DHE -- replacing the client's list of supported ciphers with only export DHE. The server will in turn send back a signed 512-bit export-grade Diffie-Hellman tuple, which the client will blindly accept -- because it doesn't realize that the server is negotiating the export version of the ciphersuite. From its perspective this message looks just like 'standard' Diffie-Hellman with really crappy parameters. 

Overview of the Logjam active attack (source: paper).
All this tampering should run into a huge snag at the end of the handshake, when the client and server exchange Finished messages that include a MAC of the handshake transcript. At this point the client should learn that something funny is going on, i.e., that what it sent no longer matches what the server is seeing. However, the loophole is this: if the attacker can recover the Diffie-Hellman secret quickly -- before the handshake ends -- she can forge her own Finished messages. In that case the client and server will be none the wiser.

The upshot is that executing this attack requires the ability to solve a 512-bit discrete logarithm before the client and server exchange Finished messages. That seems like a tall order.
Can you really solve a discrete logarithm before a TLS handshake times out?
In practice, the fastest route to solving the discrete logarithm in finite fields is via an algorithm called the Number Field Sieve (NFS). Using NFS to solve a single 512-bit discrete logarithm instance requires several core-years -- or about a week of wall-clock time given a few thousand cores -- which would seem to rule out solving discrete logs in real time.

However, there is a complication. In practice, NFS can actually be broken up into two different steps:
  1. Pre-computation (for a given prime p). This includes the process of polynomial selection, sieving, and linear algebra, all of which depend only on p. The output of this stage is a table for use in the second stage.
  2. Solving to find a (for a given g^a mod p). The final stage, called the descent, uses the table from the precomputation. This is the only part of the algorithm that actually involves a specific g and g^a.
The important thing to know is that the first stage of the attack consumes the vast majority of the time, up to a full week on a large-scale compute cluster. The descent stage, on the other hand, requires only a few core-minutes. Thus the attack cost depends primarily on where the server gets its Diffie-Hellman parameters from. The best case for an attacker is when p is hard-coded into the server software and used across millions of machines. The worst case is when p is re-generated routinely by the server.

I'll let you guess what real TLS servers actually do.

In fact, large-scale Internet scans by the team at University of Michigan show that most popular web server software tends to re-use a small number of primes across thousands of server instances. This is done because generating prime numbers is scary, so implementers default to using a hard-coded value or a config file supplied by your Linux distribution. The situation for export Diffie-Hellman is particularly awful, with only two (!) primes used across up to 92% of enabled Apache/mod_ssl sites.
Number of seconds to solve a 512-bit discrete log (source: paper).

The upshot of all of this is that about two weeks of pre-computation is sufficient to build a table that allows you to perform the downgrade against most export-enabled servers in just a few minutes (see the chart at right). This is fast enough that it can be done before the TLS connection timeout. Moreover, even if this is not fast enough, the connection can often be held open longer by using clever protocol tricks, such as sending TLS warning messages to reset the timeout clock.

Keep in mind that none of this shared prime craziness matters when you're using sufficiently large prime numbers (on the order of 2048 bits or more). It's only a practical issue when you're using small primes, like 512-bit, 768-bit or -- and here's a sticky one I'll come back to in a minute -- 1024-bit.
How do you fix the downgrade to export DHE?
The best and most obvious fix for this problem is to exterminate export ciphersuites from the Internet. Unfortunately, these awful configurations are the default in a number of server software packages (looking at you Postfix), and getting people to update their configurations is surprisingly difficult (see e.g., FREAK).

A simpler fix is to upgrade the major web browsers to resist the attack. The easy way to do this is to enforce a larger minimum size for received DHE keys. The problem here is that the fix itself causes some collateral damage -- it will break a small but significant fraction of lousy servers that organically negotiate (non-export) DHE with 512 bit keys.

The good news here is that the major browsers have decided to break the Internet (a little) rather than allow it to break them. Each has agreed to raise the minimum size limit to at least 768 bits, and some to a minimum of 1024 bits. It's still not perfect, since 1024-bit DHE may not be cryptographically sound against powerful attackers, but it does address the immediate export attack. In the longer term the question is whether to use larger negotiated DHE groups, or abandon DHE altogether and move to elliptic curves.
What does this mean for larger parameter sizes?
The good news so far is that 512-bit Diffie-Hellman is only used by a fraction of the Internet, even when you account for active downgrade attacks. The vast majority of servers use Diffie-Hellman moduli of length at least 1024 bits. (The widespread use of 1024 is largely due to a hard-cap in older Java clients. Go away Java.)

While 2048-bit moduli are generally believed to be outside of anyone's reach, 1024-bit DHE has long been considered to be at least within groping range of nation-state attackers. We've known this for years, of course, but the practical implications haven't been quite clear. This paper tries to shine some light on that, using Internet-wide measurements and software/hardware estimates.

If you recall from above, the most critical aspect of the NFS attack is the need to perform large amounts of pre-computation on a given Diffie-Hellman prime p, followed by a relatively short calculation to break any given connection that uses p. At the 512-bit size the pre-computation only requires about a week. The question then is, how much does it cost for a 1024-bit prime, and how common are shared primes?

While there's no exact way to know how much the 1024-bit attack would cost, the paper attempts to provide some extrapolations based on current knowledge. With software, the cost of the pre-computation seems quite high -- on the order of 35 million core-years. Making this happen for a given prime within a reasonable amount of time (say, one year) would appear to require billions of dollars of computing equipment if we assume no algorithmic improvements. Even if we rule out such improvements, it's conceivable that this cost might be brought down to a few hundred million dollars using hardware. This doesn't seem out of bounds when you consider leaked NSA cryptanalysis budgets.

What's interesting is that the descent stage, required to break a given Diffie-Hellman connection, is much faster. Based on some implementation experiments by the CADO-NFS team, it may be possible to break a Diffie-Hellman connection in as little as 30 core-days, with parallelization hugely reducing the wall-clock time. This might even make near-real-time decryption of Diffie-Hellman connections practical.
Is the NSA actually doing this?
So far all we've noted is that NFS pre-computation is at least potentially feasible when 1024-bit primes are re-used. That doesn't mean the NSA is actually doing any of it.

There is some evidence, however, that suggests the NSA has decryption capability that's at least consistent with such a break. This evidence comes from a series of Snowden documents published last winter in Der Spiegel. Together they describe a large-scale effort at NSA and GCHQ, capable of decrypting 'vast' amounts of Internet traffic, including IPSec, SSH and HTTPS connections.

NSA slide illustrating exploitation of IPSec encrypted traffic (source: Spiegel).
While the architecture described by the documents mentions attacks against many protocols, the bulk of the energy seems to be around the IPSec and IKE protocols, which are used to establish Virtual Private Networks (VPNs) between individuals and corporate networks such as financial institutions.

The nature of the NSA's exploit is never made clear in the documents, but diagram at right gives a lot of the architectural details. The system involves collecting Internet Key Exchange (IKE) handshakes, transmitting them to the NSA's Cryptanalysis and Exploitation Services (CES) enclave, and feeding them into a decryption system that controls substantial high performance computing resources to process the intercepted exchanges. This is at least circumstantially consistent with Diffie-Hellman cryptanalysis.

Of course it's entirely possible that the attack is based on a bad random number generator, weak symmetric encryption, or any number of engineered backdoors. There are a few pieces of evidence that militate towards a Diffie-Hellman break, however:

  1. IPSec (or rather, the IKE key exchange) uses Diffie-Hellman for every single connection, meaning that it can't be broken without some kind of exploit, although this doesn't rule out the other explanations.
  2. The IKE exchange is particularly vulnerable to pre-computation, since IKE uses a small number of standardized prime numbers called the Oakley groups, which are going on 17 years old now. Large-scale Internet scanning by the Michigan team shows that a majority of responding IPSec endpoints will gladly negotiate using Oakley Group 1 (768 bit) or Group 2 (1024 bit), even when the initiator offers better options.
  3. The NSA's exploit appears to require the entire IKE handshake as well as any pre-shared key (PSK). These inputs would be necessary for recovery of IKEv1 session keys, but are not required in a break that involves only symmetric cryptography.
  4. The documents explicitly rule out the use of malware, or rather, they show that such malware ('TAO implants') is in use -- but that malware allows the NSA to bypass the IKE handshake altogether.
I would stipulate that beyond the Internet measurements and computational analysis, this remains firmly in the category of  'crazy-eyed informed speculation'. But while we can't rule out other explanations, this speculation is certainly consistent with a hardware-optimized break of Diffie-Hellman 768 and 1024-bit, along with some collateral damage to SSH and related protocols.
So what next?
The paper gives a detailed set of recommendations on what to do about these downgrade attacks and (relatively) weak DHE groups. The website provides a step-by-step guide for server administrators. In short, probably the best long-term move is to switch to elliptic curves (ECDHE) as soon as possible. Failing this, clients and servers should enforce at least 2048-bit Diffie-Hellman across the Internet. If you can't do that, stop using common primes.

Making this all happen on anything as complicated as the Internet will probably consume a few dozen person-lifetimes. But it's something we have to do, and will do, to make the Internet work properly.

Notes:

* There are reasons for this. Some SSL/TLS ciphersuites (such as the RSA encryption-based ciphersuites) don't use signatures within the protocol, so the only way to authenticate the handshake is to negotiate a ciphersuite, run the key exchange protocol, then use the resulting shared secret to authenticate the negotiation messages after the fact. But SSL/TLS DHE involves digital signatures, so it should be possible to achieve a stronger level of security than this. It's unfortunate that the protocol does not.

by Matthew Green (noreply@blogger.com) at May 22, 2015 10:18 PM

CompsciOverflow

Stateless pseudorandom/hash algorithm for ℤⁿ → [0,1)

There's a problem which I had on a number of unrelated occasions which I usually work around in some way or another. Deep inside, though, it bothers me that I haven't ever found a “proper” solution to this:

Given an n-tuple of integers, how do I derive a stateless pseudorandom number / hash value in the half-open interval [0,1)?

The usual “solution” people use in this situation is to multiply the parameters by some large primes, or bit-shift them, or do any other complex calculation which produces a result that is “random enough” for the given purpose. I guess these algorithms have been cargo-culted over a long time, maybe from a PRNG implementation.

Instead of copying one of the circulating code snippets, I'd like to really understand what's going on here. Is there an algorithm or a set of algorithms which should typically be used in this kind of situation, and if so, what's the proper way to choose the constants?
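For what it's worth, the non-cargo-cult version of those snippets is usually a deliberate integer mixing function. One common pattern, sketched below: fold the tuple through a strong 64-bit avalanche mixer (the constants are the SplitMix64 finalizer), then keep the top 53 bits so the double lands in [0,1):

// Strong 64-bit avalanche mixer (SplitMix64's finalizer).
def mix64(z0: Long): Long = {
  var z = z0 + 0x9E3779B97F4A7C15L
  z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L
  z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL
  z ^ (z >>> 31)
}

// Fold the tuple through the mixer, then map the top 53 bits to [0,1).
def hashToUnit(xs: Seq[Int]): Double = {
  val h = xs.foldLeft(0L)((acc, x) => mix64(acc ^ (x.toLong & 0xFFFFFFFFL)))
  (h >>> 11) * (1.0 / (1L << 53))
}

The same tuple always maps to the same value (the "stateless" property), and distinct tuples produce effectively independent-looking outputs; the avalanche property of the mixer is what the ad-hoc prime-multiply snippets are trying to approximate.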

by user3426575 at May 22, 2015 10:13 PM

Planet Emacsen

(or emacs: Ivy-mode 0.5.0 is out

At this point, swiper is only a fraction of ivy-mode's functionality. Still, it's nice to keep them all, together with counsel, in a single repository: counsel-git-grep works much better this way.

Anyway, I'll echo the release notes here, there are quite a few exciting new features.

Fixes

  • TAB shouldn't delete input when there's no candidate.
  • TAB should switch directories properly.
  • require dired when completing file names, so that the directory face is loaded.
  • TAB should work with confirm-nonexistent-file-or-buffer.
  • TAB should handle empty input.
  • work around grep-read-files: it should be possible to simply M-x rgrep RET RET RET.
  • Fix the transition from a bad regex to a good one - you can input a bad regex to get 0 candidates, the candidates come back once the regex is fixed.
  • ivy-switch-buffer should pre-select other-buffer just like switch-buffer does it.
  • Fix selecting "C:\" on Windows.
  • counsel-git-grep should warn if not in a repository.
  • C-M-n shouldn't try to call action if there isn't one.
  • Turn on sorting for counsel-info-lookup-symbol.
  • ivy-read should check for an outdated cons initial-input.

New Features

Out of order matching

I actually like in-order matching, meaning the input "in ma" will match "in-order matching", but not "made in". But the users can switch to out-of-order matching if they use this code:

(setq ivy-re-builders-alist
          '((t . ivy--regex-ignore-order)))

ivy-re-builders-alist is the flexible way to customize the regex builders per-collection. Using t here, means to use this regex builder for everything. You could choose to have in-order for files, and out-of-order for buffers and so on.

New defcustom: ivy-tab-space

Use this to have a space inserted each time you press TAB:

(setq ivy-tab-space t)

ignore case for TAB

"pub" can expand to "Public License".

New command: counsel-load-library

This command is much better than the standard load-library that it upgrades. It applies a sort of uniquify effect to all your libraries, which is very useful:

counsel-load-library

In this case, I have avy installed both from the package manager and manually. I can easily distinguish them.

Another cool feature is that instead of using find-library (which is also bad, since it would report two versions of avy with the same name and no way to distinguish them), you can simply use counsel-load-library and type C-. instead of RET to finalize.

Here's another scenario: first load the library, then call ivy-resume and immediately open the library file.

New command: ivy-partial

Does a partial complete without exiting. Use this code to replace ivy-partial-or-done with this command:

(define-key ivy-minibuffer-map (kbd "TAB") 'ivy-partial)

Allow to use ^ in swiper

In regex terms, ^ is the beginning of line. You can now use this in swiper to filter your matches.

New command: swiper-avy

This command is crazy good: it combines the best features of swiper (the whole buffer at once, flexible input length) and avy (quickly select one candidate once you've narrowed to about 10-20 candidates).

For instance, I can enter "to" into swiper to get around 10 matches. Instead of using C-n a bunch of times to select the one of 10 that I want, I just press C-', followed by a or s or d ... to select one of the matches visible on screen.

So both packages use their best feature to cover up the other's worst drawback.

Add support for virtual buffers

I was never a fan of recentf until now. The virtual buffers feature works in the same way as ido-use-virtual-buffers: when you call ivy-switch-buffer, your recently visited files as well as all your bookmarks are appended to the end of the buffer list.

Suppose you killed a buffer and want to bring it back: now you do it as if you didn't kill the buffer and instead buried it. The bookmarks access is also nice.

Here's how to configure it, along with some customization of recentf:

(setq ivy-use-virtual-buffers t)

(use-package recentf
  :config
  (setq recentf-exclude
        '("COMMIT_MSG" "COMMIT_EDITMSG" "github.*txt$"
          ".*png$"))
  (setq recentf-max-saved-items 60))

Add a few wrapper commands for the minibuffer

All these commands just forward to their built-in counterparts, only trying not to exit the first line of the minibuffer.

  • M-DEL calls ivy-backward-kill-word
  • C-d calls ivy-delete-char
  • M-d calls ivy-kill-word
  • C-f calls ivy-forward-char

Allow to customize the minibuffer formatter

See the wiki on how to customize the minibuffer display to look like this:

100 Find file: ~/
  file1
  file2
> file3
  file4

When completing file names, TAB should defer to minibuffer-complete

Thanks to this, you can TAB-complete your ssh hosts, e.g.:

  • /ss TAB -> /ssh
  • /ssh:ol TAB -> /ssh:oleh@

More commands work with ivy-resume

I've added:

  • counsel-git-grep
  • counsel-git

Others (that start with counsel-) should work fine as well. Also don't forget that you can use C-M-n and C-M-p to:

  • switch candidate
  • call the action for the candidate
  • stay in the minibuffer

This is especially powerful for counsel-git-grep: you can easily check the whole repository for something with just typing in the query and holding C-M-n. The matches will be highlighted swiper-style, of course.

Allow to recenter during counsel-git-grep

Use C-l to recenter.

Update the quoting of spaces

Split only on single spaces; from all other space groups, remove one space.

As you might know, a space is used in place of .* in ivy. In case you want an actual space, you can now quote it more easily.

Outro

Thanks to all who contributed, check out the new stuff, and make sure to bind ivy-resume to something short: it has become a really nice feature.

by (or emacs at May 22, 2015 10:00 PM

StackOverflow

Can you formulate a monoid or semigroup for the radix sort?

This is the pseudocode for the radix sort:

Pseudocode for Radix Sort:
Radix-Sort(A, d)
// Each key in A[1..n] is a d-digit integer. (Digits are
// numbered 1 to d from right to left.)
1. for i = 1 to d do
Use a stable sorting algorithm to sort A on digit i.

This is the Scala code for the radix sort:

object RadixSort {
  val WARP_SIZE = 32

  def main(args: Array[String]) = {
    var A = Array(123,432,654,3123,654,2123,543,131,653,123)

    radixSortUintHost(A, 4).foreach(i => println(i))
  }

  // LSB radix sort
  def radixSortUintHost(A: Array[Int], bits: Int): Array[Int] = {
    var a = A
    var b = new Array[Int](a.length)

    var rshift = 0
    var mask = ~(-1 << bits)

    while (mask != 0) {
      val cntArray = new Array[Int](1 << bits)

      for (p <- 0 until a.length) {
        var key = (a(p) & mask) >> rshift
        cntArray(key)+= 1
      }

      for (i <- 1 until cntArray.length)
        cntArray(i) += cntArray(i-1)

      for (p <- a.length-1 to 0 by -1) {
        var key = (a(p) & mask) >> rshift
        cntArray(key)-= 1
        b(cntArray(key)) = a(p)
      }

      val temp = b
      b = a
      a = temp

      mask <<= bits
      rshift += bits
    }

    a // note: after the final buffer swap the sorted data ends up in a, not b
  }
}

This is the Haskell code for the radix sort:

import Data.Bits (Bits(testBit, bitSize))
import Data.List (partition)

lsdSort :: (Ord a, Bits a) => [a] -> [a]
lsdSort = fixSort positiveLsdSort

msdSort :: (Ord a, Bits a) => [a] -> [a]
msdSort = fixSort positiveMsdSort

-- Fix a sort that puts negative numbers at the end, like positiveLsdSort and positiveMsdSort
fixSort sorter list = uncurry (flip (++)) (break (< 0) (sorter list))

positiveLsdSort :: (Bits a) => [a] -> [a]
positiveLsdSort list = foldl step list [0..bitSize (head list)] where
  step list bit = uncurry (++) (partition (not . flip testBit bit) list)

positiveMsdSort :: (Bits a) => [a] -> [a]
positiveMsdSort list = aux (bitSize (head list) - 1) list where
  aux _ [] = []
  aux (-1) list = list
  aux bit list = aux (bit - 1) lower ++ aux (bit - 1) upper where
    (lower, upper) = partition (not . flip testBit bit) list

My question is: Can you formulate a monoid or semigroup for the radix sort?
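One partial observation (a sketch, not a full answer): the counting pass of LSD radix sort is genuinely monoidal. Per-digit histograms of bucket counts combine by pointwise addition, with the all-zeros histogram as the identity, and that associativity is exactly what lets a parallel radix sort count buckets over chunks independently and merge the results. The Histogram type and |+| operator below are illustrative names, not from any library:

case class Histogram(counts: Vector[Int]) {
  // Monoid operation: pointwise addition of bucket counts.
  def |+|(other: Histogram): Histogram =
    Histogram(counts.zip(other.counts).map { case (a, b) => a + b })
}

object Histogram {
  // Monoid identity: no elements counted yet.
  def empty(buckets: Int): Histogram = Histogram(Vector.fill(buckets)(0))

  // Count occurrences of each `bits`-wide digit at offset `rshift`.
  def of(xs: Seq[Int], bits: Int, rshift: Int): Histogram = {
    val mask = (1 << bits) - 1
    xs.foldLeft(empty(1 << bits)) { (h, x) =>
      val key = (x >> rshift) & mask
      Histogram(h.counts.updated(key, h.counts(key) + 1))
    }
  }
}

// The monoid law in action: histograms of two halves combine into the
// histogram of the whole array.
val xs = Vector(123, 432, 654, 3123, 654, 2123, 543, 131, 653, 123)
val (l, r) = xs.splitAt(5)
assert((Histogram.of(l, 4, 0) |+| Histogram.of(r, 4, 0)) == Histogram.of(xs, 4, 0))

The permutation step, by contrast, depends on order (it relies on stability), so the sort as a whole does not decompose into a monoid as naturally as the counting does.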

by hawkeye at May 22, 2015 09:55 PM

DragonFly BSD Digest

Haswell +, power–

A recent commit from Matthew Dillon means users of Intel Haswell or later CPUs will see reduced power usage, if I’m reading this commit correctly.

by Justin Sherrill at May 22, 2015 09:53 PM

Lobsters

Userscript to hide/show child comments on Lobste.rs

A pretty simple userscript that adds an expand/collapse button that shows/hides children comments.

I built it because I’ve noticed that as more users actively participate in discussions, it can be a little annoying to scroll past some of the bigger comment chains (like the main one on the ruby community post).

Tested on Chrome with Tampermonkey. It doesn’t work with Chrome’s userscript -> extension conversion right now. I fixed the issue with that, you can now import it as an extension.

Repo here: https://bitbucket.org/stupermundi/lobste.rs-comment-hider/overview

Comments

by tesla at May 22, 2015 09:39 PM

StackOverflow

How do I fix this dependency issue in Clojure?

I'm having a lot of trouble fixing an issue where the dependencies for two different packages are colliding. My project.clj's dependencies look like this:

  :dependencies [[org.clojure/clojure "1.6.0"]
                 [itsy "0.1.1"]  
                 [amazonica "0.3.22" :exclusions [commons-logging org.apache.httpcomponents/httpclient com.fasterxml.jackson.core/jackson-core]]])

My namespace looks like this:

(ns crawler.core
  (:require [itsy.core :refer :all])
  (:require [itsy.extract :refer :all])
  (:use  [amazonica.core]
         [amazonica.aws.s3]))

When I try to load the namespace into lein's repl with (load crawler/core), I get this error:

CompilerException java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z, compiling:(amazonica/core.clj:1:1)

Online sources suggest that this is a dependency mismatch. How do I fix it?

by user592419 at May 22, 2015 09:24 PM

Combine two default Ansible host files including one being ec2.py?

I'm using Ansible in a mixed environment of AWS and non-AWS machines. I'd like to avoid passing hosts on the command line. How do I combine multiple host files in Ansible and make that the default? The current recommendation on the Ansible site is to override /etc/ansible/hosts with ec2.py, which prevents me from adding additional hosts. Thanks.

by Josh Unger at May 22, 2015 09:22 PM

QuantOverflow

Yield and interest rate? [on hold]

Are they the same thing?

Is yield the annualized return rate?

Why, when yields rise, does the yearly return increase but the price fall?

by maximum at May 22, 2015 09:20 PM

StackOverflow

How to convert from Map[String,Any] to (String, String)*

A function takes an input as follows:

myFunction("param1" -> "value1", "param2" -> "value2")

Parameter type in myFunction is (String,String)*. Now, I want to store these parameters in a map object like this:

val p = Map("param1" -> "value1", "param2" -> "value2")

The reason is that I want to pass p around before I pass it into myFunction, like this: myFunction([converting p to (String,String)* here]), and I cannot change the parameter type of myFunction. How can I convert p to (String, String)*?
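A minimal sketch (myFunction here is a stand-in for the real one): a Map[String, String] is an Iterable of pairs, so its toSeq can be expanded into the (String, String)* varargs parameter with the `: _*` type ascription:

def myFunction(params: (String, String)*): Unit =
  params.foreach { case (k, v) => println(s"$k = $v") }

val p = Map("param1" -> "value1", "param2" -> "value2")
myFunction(p.toSeq: _*)

If the values really are Any, as in the title, they need converting first, e.g. p.mapValues(_.toString).toSeq: _*.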

by Hoang Ong at May 22, 2015 09:10 PM

Fefe

The FBI cannot name a single terrorism case that ...

The FBI cannot name a single terrorism case that it solved thanks to its Patriot Act snooping powers.
Inspector General Michael E. Horowitz said that between 2004 and 2009, the FBI tripled its use of bulk collection under Section 215 of the Patriot Act, which allows government agents to compel businesses to turn over records and documents, and increasingly scooped up records of Americans who had no ties to official terrorism investigations.
Same as always. Where there's a trough, the pigs gather.

May 22, 2015 09:01 PM

Do you know Clarke and Dawe? A great pair of old ...

Do you know Clarke and Dawe? A great pair of old men from Australia who do … hmm, comedy? Cabaret? Political commentary? Well, whatever you want to call it. They became famous with The Front Fell Off, but their newer work is also incredibly good. Here is another classic on the death of Margaret Thatcher.

And I just found this video of the two of them on immigration, which I can only warmly recommend to everyone. Enjoy!

Update: And while you're at it, also watch the one on quantitative easing :-)

May 22, 2015 09:01 PM

QuantOverflow

Markowitz portfolio optimization question

I am studying the Markowitz portfolio optimization theory, and I just wanted to ask if I understood this correctly. For a stock portfolio we distinguish two kinds of risks: an unsystematic risk, which is due to the correlations between the stocks and which can be minimized by diversification, and a systematic risk, which is due to general trends in the market and which cannot be reduced by diversification.

So, Markowitz portfolio optimization is a procedure to minimize this unsystematic risk by choosing appropriate weights. Right? It only deals with the unsystematic risk and not with the systematic one. Is there a way to reduce the systematic risk?

Thanks.

by ani at May 22, 2015 09:00 PM

StackOverflow

Reading the filesystem, should it be asynchronous?

From within a controller action, I want to read the filesystem. The files will contain content, so the file sizes will vary; I'd guess each is a few pages of "article"-type content.

Since everything should be asynchronous, does Play have a built-in method to fetch files from the filesystem asynchronously?
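A sketch of the usual approach (Play has no built-in non-blocking filesystem reader; the file path here is a made-up example): wrap the blocking read in a Future, marking it with scala.concurrent.blocking so the thread pool knows the task may block.

import play.api.mvc._
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.{Future, blocking}

object Articles extends Controller {
  // Serves the content of a hypothetical articles/<name>.txt file.
  def article(name: String) = Action.async {
    Future {
      blocking {
        val src = scala.io.Source.fromFile(s"articles/$name.txt")
        try Ok(src.mkString) finally src.close()
      }
    }
  }
}

For heavy file IO it is common to point such Futures at a dedicated execution context rather than the default one, so blocked threads don't starve request processing.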

by cool breeze at May 22, 2015 08:58 PM

UnixOverflow

Configure the network interface to send frames with 802.3 format on freebsd

Is it possible to configure FreeBSD (or Linux) so that all frames are sent in the 802.3 frame format (with the LLC part of 802.2) rather than the Ethernet II frame format?

Thank you

by nuggets at May 22, 2015 08:47 PM

StackOverflow

What do these functions in the Scheme language do?

I'm a newbie and I didn't understand the language very well. Could anyone please explain to me what these functions do?

First function:

(define (x l)
    (cond
        ((null? l) 0)
        ((list? (car l))
                 (+ (x (car l)) (x (cdr l))))
        (else    (+ 1 (x (cdr l))))
))

Second function:

(define (x l)
    (cond
        ((null? l) 0)
        ((list? (car l))
                 (+ (x (car l)) (x (cdr l))))
        (else    (+ (car l) (x (cdr l))))
))

I understand the beginning, but I don't understand the conditions. Any help?

by U23r at May 22, 2015 08:08 PM

Fefe

Happy 50th birthday to Monitor. ...

Happy 50th birthday to Monitor. If anyone happens to have a recording of the program referenced at 2:10 near the beginning, "Lauschen für Amerika - Die Abhöraffäre -", I would be glad to be provided a private copy. Does anyone perhaps know someone at WDR who could release it? It really is a document of contemporary history; it should be freely available on the internet.

Update: I seem to be expressing myself unclearly. I don't want a recording of this episode, and I don't want the clip from this episode either. I want the program from the 1970s from which they show the clip at 2:10.

May 22, 2015 08:01 PM

StackOverflow

How to wrap oAuth headers in clj-http?

I'm trying to post a twitter status update with clojure... but this probably really is a question about oAuth headers, and using it through the wonderful clj-http library.

I've used clj-http before for basic-auth and other type of headers and it was fairly straightforward. The Authorization: Bearer ... was a bit confusing but I got there in the end.

For twitter though, I am supposed to pass a lot of variables in Authorization and I am not quite sure how I'd go about doing that. Here's the curl command, according to twitter, that I'll need to post the tweet:

curl --request 'POST' 'https://api.twitter.com/1.1/statuses/update.json' 
     --data 'status=this+is+sparta' 
     --header 'Authorization: OAuth oauth_consumer_key="xxxxx",
                              oauth_nonce="xxxxx", 
                              oauth_signature="xxxxx", 
                              oauth_token="xxxxx", oauth_version="1.0"' 
     --verbose

So I am not sure how I would append all the OAuth... things in the header. Trying with (str ..) isn't really working out. Here's what I've got so far:

(client/post "https://api.twitter.com/1.1/statuses/update.json"
             {:headers {:Authorization (str "OAuth oauth_consumer_key='xxxxx', oauth_nonce='xxxxx', oauth_signature='xxxxx', oauth_token='xxxxx', oauth_version='1.0'")}
              :form-params {:status "This is sparta"}})

This returns a 403 permission error when I try.

Any ideas on how I'd construct that query?

P.S.: I do have the OAuth token and token_secret for the account... but I notice the token_secret value isn't being passed? And what is oauth_nonce for? For now I'm passing the value that twitter gave me for curl... looking around, it seems that it is just a random value, so I'm not that fussed about it.

by LocustHorde at May 22, 2015 07:59 PM

CompsciOverflow

The Change-making problem algorithm proof (at the dynamic programming method)

I saw here the algorithm for the "Change-making problem" (at the dynamic programming method). I saw it here: http://www.columbia.edu/~cs2035/courses/csor4231.F07/dynamic.pdf

I'm trying to find a proof for this algorithm. I tried to prove it by induction, but I think that won't work at all...

I'd appreciate help with how to prove it. A source on the internet would also be good.

Thank you!

P.S. 1 - I don't need the proof of the greedy algorithm :-) P.S. 2 - I posted it on Math SE, but someone there told me that this question belongs here...

Thank you again!
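A sketch of the standard correctness argument, filling in the induction the question asks about (notation assumed from the linked notes): let $C[p]$ be the table value, with $C[0] = 0$ and $C[p] = 1 + \min_{i\,:\,d_i \le p} C[p - d_i]$, and let $\mathrm{OPT}(p)$ be the true minimum number of coins for amount $p$. Use strong induction on $p$, assuming $C[q] = \mathrm{OPT}(q)$ for all $q < p$. Feasibility ($C[p] \ge \mathrm{OPT}(p)$): the minimum is attained by some coin $d_i$, and by induction $C[p - d_i] = \mathrm{OPT}(p - d_i)$ coins paying $p - d_i$, plus the coin $d_i$, form a valid way to pay $p$. Optimality ($C[p] \le \mathrm{OPT}(p)$): an optimal solution for $p$ contains some coin $d_j$; removing it leaves a solution for $p - d_j$ that must itself be optimal (otherwise swapping in a better one would improve the original — the usual cut-and-paste step), so $\mathrm{OPT}(p) = 1 + \mathrm{OPT}(p - d_j) = 1 + C[p - d_j] \ge C[p]$.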

by Yoar at May 22, 2015 07:35 PM

StackOverflow

Play: How to prevent the body parser from being invoked in case the action code does not get executed

I've created a custom Action that prevents unauthorized users from accessing protected functionality:

class SecureAction extends ActionBuilder[SecureRequest] {

  def invokeBlock[A](request: Request[A], block: SecureRequest[A] => Future[Result]) = {
    ...
    future.flatMap {
      case token if (!isAuthorized(token)) =>
        Logger.info(s"request ${request.path} not authorized: user ${token.username} does not have required privileges")
        Future.successful(Unauthorized(error(requestNotAuthorized)))
      case ...
    }
  }
}

If the current user is not authorized, then SecureAction returns Unauthorized and never executes the provided action code. Below is what my Controller looks like:

object MyController extends Controller {

  ...

  def saveFile = SecureAction.async(fsBodyParser) { implicit request =>

    // code here not executed if current user has not required privileges
    ...
  }
}

The problem is that even if the current user is not authorized and SecureAction returns Unauthorized without executing the action code, the body parser still gets invoked... and this is not what I was expecting.

That said, the question is: how do I prevent the body parser (i.e. fsBodyParser) from being invoked in case SecureAction returns Unauthorized?
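A sketch of one way to get this (assuming Play 2.3's iteratee-based API; authorized stands in for the asker's token check): an EssentialAction is a function RequestHeader => Iteratee[Array[Byte], Result], so the authorization decision can be made before any body bytes are consumed, and the wrapped action's body parser only runs on the authorized path.

import play.api.mvc._
import play.api.libs.iteratee.Done

// Hypothetical synchronous check; an async one would need
// Iteratee.flatten over a Future instead.
def authorized(rh: RequestHeader): Boolean = ???

def Secured(action: EssentialAction): EssentialAction = EssentialAction { rh =>
  if (authorized(rh))
    action(rh) // only now does the wrapped body parser get invoked
  else
    Done[Array[Byte], Result](Results.Unauthorized) // consumes no body bytes
}

The ActionBuilder in the question can't achieve this on its own, because by the time invokeBlock runs it already holds a Request[A], i.e. a parsed body.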

by j3d at May 22, 2015 07:30 PM

fluent docker port connection error [on hold]

Please help me with the steps to configure a fluentd Docker container to read the syslog and write it to a host location. I have tried to use the in_syslog plugin, but it doesn't seem to work.

Thanks in advance

by ashok at May 22, 2015 07:29 PM

CompsciOverflow

Dijsktra's algorithm example

In the following example, how does Dijkstra's algorithm find the shortest path?

[graph diagram]

I think we'll get abedz, while the shortest should be acedz.

by qed at May 22, 2015 07:08 PM

StackOverflow

How install and use scalaj-http

I have an assignment to use some HTTP client. I am planning to use scalaj-http for that. The installation page, https://github.com/scalaj/scalaj-http, says:

Installation in your build.sbt

libraryDependencies += "org.scalaj" %% "scalaj-http" % "1.1.4"

But nothing is clearly mentioned: where do I need to do this, and what needs to be done? Is it a command, or do we have to paste this somewhere?

Can someone explain in detail?
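A minimal sketch of what the README means (assuming a standard sbt layout): build.sbt is a plain text file at the root of your project, next to src/; you paste the libraryDependencies line into it rather than running it as a command, and sbt downloads the library on the next compile. After that, usage looks like:

import scalaj.http._

object Demo extends App {
  // Simple blocking GET; example.com is a placeholder URL.
  val response: HttpResponse[String] = Http("http://example.com").asString
  println(response.code)
  println(response.body.take(200))
}

Then sbt run compiles the project, fetching the dependency first.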

by Sahil Sharma at May 22, 2015 07:04 PM

Fefe

Do you actually know who sits on the supervisory board of Deutsche ...

Do you actually know who sits on the supervisory board of Deutsche Bahn?

At first glance, this stands out:

Alexander Kirchner*
Deputy Chairman of the Supervisory Board
Chairman of the Eisenbahn- und Verkehrsgewerkschaft (EVG)
Runkel
Oh, really? Anyone else from the EVG? Why, yes!
Klaus Dieter Hommel*
Deputy Chairman of the Eisenbahn- und Verkehrsgewerkschaft (EVG)
Neuenhagen

Regina Rusch-Ziemba*
Deputy Chairwoman
Eisenbahn- und Verkehrsgewerkschaft (EVG)
Hamburg

Hmm, well then surely a few people from the GdL sit on the supervisory board too, right?

Hmm … no. Not a single one.

But this person does sit there:

Kirsten Lühmann
Member of the German Bundestag
Hermannsburg
Yes! The one SPD vote against! My respect for this woman grows!

May 22, 2015 07:01 PM

"EU dropped pesticide laws due to US pressure over ...

"EU dropped pesticide laws due to US pressure over TTIP, documents reveal". Um was für Pestizide ging es da?
US trade officials pushed EU to shelve action on endocrine-disrupting chemicals linked to cancer and male infertility to facilitate TTIP free trade deal
Na ist ja auch irgendwie klar. Wen interessieren schon Krebs und Unfruchtbarkeit, wenn man dafür Chlorhühnchen und Schiedsgerichte kriegt!1!!

May 22, 2015 07:01 PM

StackOverflow

Scala Pattern Matching on Different type of Seq [duplicate]

I have 3 functions, kk expect Array[Byte] or List[Array[Byte]], So I did a pattern matching,

def gg (a :Array[Byte]) = "w"
def ff (a :List[Array[Byte]]) = "f"

def kk(datum : Any) = datum match {
  case w : Array[Byte] => gg(w)
  case f :List[Array[Byte]] => ff(f)
  case _ => throw new Exception("bad data")
}

and I get an error when I try to compile the code:

non-variable type argument List[Any] in type pattern List[List[Any]] (the underlying of List[List[Any]]) is unchecked since it is eliminated by erasure

So instead I constructed my kk function as follows, and it compiles now:

def kk(datum : Any) = datum match {
  case w : Array[Byte] => gg(w)
  case f :List[_] => ff(f.asInstanceOf[List[Array[Byte]]])
  case _ => throw new Exception("bad data")
}

My questions: 1: Is my current version of kk an idiomatic way to do pattern matching for a List? If not, can someone show me how to do it?

2: Say I want to pattern match List[Any] and List[List[Any]]; how am I able to do that? (ff(f.asInstanceOf[List[Array[Byte]]]) may cause an error if datum is of type List[Byte])
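A sketch of one common workaround (an assumption, not the only idiom): since the element type is erased, inspect the elements themselves at runtime. Note that forall on an empty list is vacuously true, so Nil takes the List branch here; pick whatever convention suits your data.

def gg(a: Array[Byte]) = "w"
def ff(a: List[Array[Byte]]) = "f"

def kk(datum: Any): String = datum match {
  case w: Array[Byte] => gg(w)
  case f: List[_] if f.forall(_.isInstanceOf[Array[Byte]]) =>
    ff(f.asInstanceOf[List[Array[Byte]]]) // safe after the runtime check
  case _ => throw new Exception("bad data")
}

The same element-inspection trick addresses question 2: to tell List[Any] from List[List[Any]], test whether the elements are themselves Lists, since the type arguments are gone at runtime either way.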

by EdwinGuo at May 22, 2015 06:57 PM

CompsciOverflow

Why is binary search called binary search?

I heard several possible explanations, so I would like some trustable reference.

Update 05.19: I'm interested in the question because one of my students wrote in his thesis that the name comes from explanation (1) below. Until now I thought/heard that it comes from explanation (2). I would feel bad both for leaving a wrong thing in his thesis, and for telling him to remove it if it might be right.

(1) Consider the search for an integer in the interval $[0, 2^n - 1]$. We can find it using $n$ questions by asking in step $i$ the $i^{th}$ binary digit of the number.

(2) If we have a search space with $2^n$ elements, we can find an unknown element by questions that repeatedly split the remaining part of the space in two.

And yes, I know that (2) can give the same algorithm as (1) but that's not the point here. (2) can be also applied for more general problems.

by domotorp at May 22, 2015 06:48 PM

QuantOverflow

Portfolio software that shows 'total return' for each investment

I'm a high school technology teacher and sponsor for the Charity Student Investment Project. Currently our students track our investment portfolio via a google spreadsheet (http://charitystudentinvestmentproject.com/index.php/portfolio) that calculates unrealized capital gain, total ROI (excluding dividend payments), market value, etc..

We are looking for a solution that will allow us to track the 'total return' including dividend payments for our entire portfolio and each individual investment. A bonus feature would be the ability to share the report on the internet, but this feature is not a requirement.

Any suggestions for free or low-cost solutions for calculating 'total return', including dividend payments?

Basically, I want a solution that 'automatically' calculates (total dividend payments + unrealized gain) to show the 'total return' of a current investment.

Thanks, Todd

A little background on the Charity Student Investment Project. We are a high school student organization and part of a 501 c3. Our mission is to teach students about personal finance and how to invest in the stock markets via real-world experience and actively managing a real-money student-managed investment portfolio.

by Todd Benrud at May 22, 2015 06:47 PM

StackOverflow

Error not found: value ebeanEnabled

I'm getting a "value not found" error with the line ebeanEnabled := false.

It also states:

Type error in expression

I'm working in Play Framework 2.3.9. Any ideas on why this might be happening?

I've tried to add

ebean.default="models.*"

to the application.conf file as suggested on the Play website but that did not fix the error.

by orangepanda at May 22, 2015 06:37 PM

How do you compare Fluentd vs. SnapLogic?

What are the pros and cons when you compare FLuentd with SnapLogic?

by FZF at May 22, 2015 06:35 PM

Chisel: Access to Module Parameters from Tester

How does one access the parameters used to construct a Module from inside the Tester that is testing it?

In the test below I am passing the parameters explicitly both to the Module and to the Tester. I would prefer not to pass them to the Tester but instead extract them from the module that was passed in.

Also, I am new to Scala/Chisel, so any tips on bad techniques I'm using would be appreciated :).

import Chisel._
import math.pow

class TestA(dataWidth: Int, arrayLength: Int) extends Module {
  val dataType = Bits(INPUT, width = dataWidth)
  val arrayType = Vec(gen = dataType, n = arrayLength)
  val io = new Bundle {
    val i_valid = Bool(INPUT)
    val i_data = dataType
    val i_array = arrayType
    val o_valid = Bool(OUTPUT)
    val o_data = dataType.flip
    val o_array = arrayType.flip
  }
  io.o_valid := io.i_valid
  io.o_data := io.i_data
  io.o_array := io.i_array
}

class TestATests(c: TestA, dataWidth: Int, arrayLength: Int) extends Tester(c) {
  val maxData = pow(2, dataWidth).toInt
  for (t <- 0 until 16) {
    val i_valid = rnd.nextInt(2)
    val i_data = rnd.nextInt(maxData)
    val i_array = List.fill(arrayLength)(rnd.nextInt(maxData))
    poke(c.io.i_valid, i_valid)
    poke(c.io.i_data, i_data)
    (c.io.i_array, i_array).zipped foreach {
      (element,value) => poke(element, value)
    }
    expect(c.io.o_valid, i_valid)
    expect(c.io.o_data, i_data)
    (c.io.o_array, i_array).zipped foreach {
      (element,value) => expect(element, value)
    }
    step(1)
  }
}    

object TestAObject {
  def main(args: Array[String]): Unit = {
    val tutArgs = args.slice(0, args.length)
    val dataWidth = 5
    val arrayLength = 6
    chiselMainTest(tutArgs, () => Module(
      new TestA(dataWidth=dataWidth, arrayLength=arrayLength))){
      c => new TestATests(c, dataWidth=dataWidth, arrayLength=arrayLength)
    }
   }
}
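A sketch of the usual fix, showing only the changed lines (plain Scala, nothing Chisel-specific): declare the constructor parameters as vals so they become public fields of the module, and let the Tester read them back off the instance it is given.

class TestA(val dataWidth: Int, val arrayLength: Int) extends Module {
  // ... io and wiring exactly as in the original ...
}

class TestATests(c: TestA) extends Tester(c) {
  // The parameters now come from the module itself.
  val maxData = math.pow(2, c.dataWidth).toInt
  val i_array = List.fill(c.arrayLength)(rnd.nextInt(maxData))
  // ... pokes and expects as before ...
}

With that, chiselMainTest only needs c => new TestATests(c).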

by Ben Reynwar at May 22, 2015 06:34 PM

StackOverflow

Parsing multipart/form-data using Apache Commons File Upload

Does the Apache Commons File Upload package provide a generic interface to stream-parse multipart/form-data chunks via InputStream, appending Array[Byte], or via any other generic streaming interface?

I know they have a streaming API, but the example only shows how to do it via ServletFileUpload, which I reckon must be specific to servlets.

If not, are there any other alternative frameworks on the JVM that let you do exactly this? Sadly, the framework that I am using, Spray.io, doesn't seem to provide a way to do this.
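One way that appears to avoid the servlet coupling (a sketch against commons-fileupload 1.3; the class and parameter names here are illustrative): ServletFileUpload is only a convenience subclass, and the streaming entry point FileUpload.getItemIterator takes a RequestContext, a small interface you can implement over any InputStream plus the request's Content-Type header.

import java.io.InputStream
import org.apache.commons.fileupload.{FileUpload, RequestContext}

class StreamRequestContext(
    body: InputStream,
    contentType: String, // e.g. "multipart/form-data; boundary=..."
    length: Int,
    encoding: String = "UTF-8") extends RequestContext {
  def getCharacterEncoding: String = encoding
  def getContentType: String = contentType
  def getContentLength: Int = length
  def getInputStream: InputStream = body
}

def parseParts(body: InputStream, contentType: String, length: Int): Unit = {
  val it = new FileUpload().getItemIterator(
    new StreamRequestContext(body, contentType, length))
  while (it.hasNext) {
    val item = it.next()
    println(s"field=${item.getFieldName} file=${item.getName}")
    item.openStream().close() // read or copy the part's bytes here
  }
}

The byte stream coming out of Spray would be adapted into the InputStream handed to this context.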

by lolski at May 22, 2015 06:18 PM

How write to stdout using assembly?

I'm having trouble trying to write the text "hi!" to stdout. I wrote this code using the default calling convention of system calls for FreeBSD (FreeBSD Developers' Handbook: 11.3.1) and my newbie assembly skills.

Here is the code (AT&T format):

.data
        str:
        .ascii "hi!"

.text

.globl main

main:
        pushl $0x3      # size
        pushl $str      # *buf
        pushl $0x1      # fd
        movl $0x4,%eax  # write
        int $0x80

        movl $0x1,%eax
        movl $0x0,%ebx
        int $0x80

The system is a FreeBSD 9 x86.

by Some_dude at May 22, 2015 06:17 PM

CompsciOverflow

User input is entered vertically and not horizontally in C++ Need help [on hold]

So recently I've been practicing C++, and I've noticed that when I use printf and scanf in C, the user input is entered horizontally. But as soon as I do that in C++ (cout and cin), it's entered vertically instead. This really bugs me because I'm a neat freak, and it looks really messy when doing the user-input loop. I would appreciate it if someone could help me with this issue. Here's the program I made in C++:

#include <iostream>

using namespace std;

int main()

{

int total = 0;
int i;
int sum = 0;

cout << "\n" << endl;

for (i = 1; i <= 4; i++)

{

    cout << "Enter number:" << endl;
    cin >> total;

    sum += total;

}

cout << "\n" << endl;

cout << "The sum is: " << sum << endl;

cout << "\n" << endl;

}

by Daniel Morris at May 22, 2015 06:13 PM

/r/scala

What is your relationship to CanBuildFrom?

To be clear, I'm not looking for an explanation.

I recently gave a presentation at work aimed at lower intermediate Scala developers, specifically on the collections framework and how CanBuildFrom works.

No doubt there are definitely experts here that already know what it is. But I'm wondering what the overall fluency of the Scala community here is. I'm under the impression that CanBuildFrom is this sort of... herald? Like if you know it, then you're something amazing. But it's also intimidating for beginners? I'm trying to make that (and many other software/Scala things) more accessible.

Do you know what it is? Do you understand how it works? Have you ever used breakOut?

For the pros in the house, have you ever extended the collections framework and written your own? Why and for what purpose?

I'm hoping to transition my presentation into something that I can deliver somewhere outside of work and post online! Then we can all be CanBuildFrom experts!
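For anyone gauging their own fluency before answering: the place many people first touch CanBuildFrom deliberately is collection.breakOut, which lets the expected result type choose the builder, skipping the intermediate collection a plain map would create. A small sketch:

import scala.collection.breakOut

val words = List("one", "two", "three")

// Without breakOut: words.map(...) builds a List[(String, Int)] first,
// then .toMap converts it. With breakOut, the CanBuildFrom implicit is
// resolved against the expected Map type and builds the Map directly.
val lengths: Map[String, Int] = words.map(w => w -> w.length)(breakOut)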

submitted by hyperforce

May 22, 2015 06:07 PM

StackOverflow

How to build and run Scala Spark locally

I'm attempting to build Apache Spark locally. The reason for this is to debug Spark methods like reduce. In particular I'm interested in how Spark implements and distributes MapReduce under the covers, as I'm experiencing performance issues and I think running these tasks from source is the best method of finding out what the issue is.

So I have cloned the latest from the Spark repo:

git clone https://github.com/apache/spark.git

Spark appears to be a Maven project, so when I create it in Eclipse, here is the structure:

[screenshot: Eclipse project structure]

Some of the top-level folders also have pom files:

[screenshot: top-level folders with pom files]

So should I just be building one of these sub-projects? Are these the correct steps for running Spark against a local code base?

by blue-sky at May 22, 2015 06:04 PM

Fefe

Besides all the climate-denier and "let's wreck the ...

Besides all the climate-denier and "let's wreck the Great Barrier Reef" stories from Australia, it's easy to overlook that the country has also taken a hard right turn in domestic policy. Specifically, this is about the Defence Trade Controls Act, which also covers crypto.
The law does not define the word precisely, but the Department of Defence suggests that merely explaining an algorithm could constitute “intangible supply”.
In other words:
Thus, an Australian professor emailing an American collaborator or postgraduate student about a new applied cryptography idea, or explaining a new variant on a cryptographic algorithm on a blackboard in a recorded lecture broadcast over the internet — despite having nothing explicitly to do with military or intelligence applications — may expose herself to criminal liability.
Holy shit.

May 22, 2015 06:01 PM

StackOverflow

How to view akka dead letters

I've created an Actor that performs some basic operations and appears to be working correctly - however I'm seeing the following show up in my logs regularly

[INFO] [05/28/2014 14:24:00.673] [application-akka.actor.default-dispatcher-5] [akka://application/deadLetters] Message [akka.actor.Status$Failure] from Actor[akka://application/user/trigger_worker_supervisor#-2119432352] to Actor[akka://application/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

I would like to actually view the contents of the Failure to establish what exactly is throwing a Failure, however I can't quite figure out how to view them.

Reading through the Akka documentation it mentions how to disable the dead-letter warning in the logs, but not how to actually write a handler to process them.

Is there a simple way to actually catch anything sent to dead-letters?
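There is: dead letters are published to the system's event stream as akka.actor.DeadLetter events, so a listener actor can log the actual message, sender and recipient. A minimal sketch (the actor and system names are placeholders):

import akka.actor._

class DeadLetterListener extends Actor {
  def receive = {
    case DeadLetter(message, sender, recipient) =>
      println(s"dead letter: $message from $sender to $recipient")
  }
}

val system = ActorSystem("application")
val listener = system.actorOf(Props[DeadLetterListener], "dead-letter-listener")
system.eventStream.subscribe(listener, classOf[DeadLetter])

Since the log line shows the undelivered message is an akka.actor.Status$Failure, printing it this way should expose the wrapped exception that produced the Failure.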

by James Davies at May 22, 2015 05:54 PM

StackOverflow

How can SBT generate metamodel classes from model classes using DataNucleus?

How can I generate metamodel classes (like QClient, QProduct, QInvoice) from persistence model classes (like Client, Product, Invoice) so that JDOQL typesafe queries can be employed?

In particular, I'm interested in generating the metamodel classes and also running bytecode enhancement on the persistence classes, via SBT and using DataNucleus with JDO annotations.


This question is related to
How can I run DataNucleus Bytecode Enhancer from SBT?

by Richard Gomes at May 22, 2015 05:42 PM

NAS devices to set appropriate permission for a share drive between windows and linux

We have an environment where the application runs on a Windows server and the database on a Linux machine. The application and database need access to a shared filesystem (like a UTL directory in the database). We use a Sun ZFS appliance, where we can configure SMB and NFS on the same share.

To avoid any permission issues, we have given full permission to everybody who can access the share. Since that works, we now want to tighten the permissions by giving only the application and database read/write access.

What do we need to change in the Windows environment to map to a userid and groupid for the share?

By the way, we tried mounting it as an NFS share in Windows 2008, but we ran into a lot of DB issues, so we don't want to go down that path again.

On Windows

net use Y: \\<zfs serverip>\utilfile /persistent:yes

On Linux

<zfs serverip>:/export/utilfile   /utilfile               nfs     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,timeo=600 0 0

When the application writes, we see the following permissions on the Linux server:

-rwxrwxrwx 1 2000000002 bin     4924 Mar 12 00:22 sdfw
-rwxrwxrwx 1 2000000002 bin       73 Mar 13 18:43 xsfw.log

by user1595858 at May 22, 2015 05:40 PM

How can I run DataNucleus Bytecode Enhancer from SBT?

I've put together a proof of concept which aims to provide a skeleton SBT multimodule project which utilizes DataNucleus JDO Enhancer with mixed Java and Scala sources.

The difficulty appears when I try to enhance persistence classes from SBT. Apparently, I'm not passing the correct classpath when calling Fork.java.fork(...) from SBT.


See also this question:
How can SBT generate metamodel classes from model classes using DataNucleus?


Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class org.datanucleus.util.Localiser
        at org.datanucleus.metadata.MetaDataManagerImpl.loadPersistenceUnit(MetaDataManagerImpl.java:1104)
        at org.datanucleus.enhancer.DataNucleusEnhancer.getFileMetadataForInput(DataNucleusEnhancer.java:768)
        at org.datanucleus.enhancer.DataNucleusEnhancer.enhance(DataNucleusEnhancer.java:488)
        at org.datanucleus.api.jdo.JDOEnhancer.enhance(JDOEnhancer.java:125)
        at javax.jdo.Enhancer.run(Enhancer.java:196)
        at javax.jdo.Enhancer.main(Enhancer.java:130)
[info] Compiling 2 Java sources to /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses...
java.lang.IllegalStateException: errno = 1
        at $54321831a5683ffa07b5$.runner(build.sbt:230)
        at $54321831a5683ffa07b5$$anonfun$model$7.apply(build.sbt:259)
        at $54321831a5683ffa07b5$$anonfun$model$7.apply(build.sbt:258)
        at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
        at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
        at sbt.std.Transform$$anon$4.work(System.scala:63)
        at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
        at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
        at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
        at sbt.Execute.work(Execute.scala:235)
        at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
        at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
        at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
        at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)

For the sake of completeness and information, below you can see a java command line generated by SBT, which can be executed by hand in a separate window, for example. It just works fine.

$ java  -cp /home/rgomes/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.6.jar:/home/rgomes/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.3.1.jar:/home/rgomes/.ivy2/cache/javax.jdo/jdo-api/jars/jdo-api-3.0.jar:/home/rgomes/.ivy2/cache/javax.transaction/transaction-api/jars/transaction-api-1.1.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-jdo-query/jars/datanucleus-jdo-query-4.0.4.jar:/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-4.0.4.jar:/home/rgomes/.ivy2/cache/com.h2database/h2/jars/h2-1.4.185.jar:/home/rgomes/.ivy2/cache/org.postgresql/postgresql/jars/postgresql-9.4-1200-jdbc41.jar:/home/rgomes/.ivy2/cache/com.github.dblock.waffle/waffle-jna/jars/waffle-jna-1.7.jar:/home/rgomes/.ivy2/cache/net.java.dev.jna/jna/jars/jna-4.1.0.jar:/home/rgomes/.ivy2/cache/net.java.dev.jna/jna-platform/jars/jna-platform-4.1.0.jar:/home/rgomes/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.7.7.jar:/home/rgomes/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar:/home/rgomes/workspace/poc-scala-datanucleus/model/src/main/resources:/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses javax.jdo.Enhancer -v -pu persistence-h2 -d /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes

May 13, 2015 3:30:07 PM org.datanucleus.enhancer.ClassEnhancerImpl save
INFO: Writing class file "/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes/model/AbstractModel.class" with enhanced definition
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ENHANCED (Persistable) : model.AbstractModel
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.ClassEnhancerImpl save
INFO: Writing class file "/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes/model/Identifier.class" with enhanced definition
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: ENHANCED (Persistable) : model.Identifier
May 13, 2015 3:30:07 PM org.datanucleus.enhancer.DataNucleusEnhancer addMessage
INFO: DataNucleus Enhancer completed with success for 2 classes. Timings : input=112 ms, enhance=102 ms, total=214 ms. Consult the log for full details
Enhancer Processing -v.
Enhancer adding Persistence Unit persistence-h2.
Enhancer processing output directory /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes.
Enhancer found JDOEnhancer of class org.datanucleus.api.jdo.JDOEnhancer.
Enhancer property key:VendorName value:DataNucleus.
Enhancer property key:VersionNumber value:4.0.4.
Enhancer property key:API value:JDO.
Enhancer enhanced 2 classes.

Below you can see some debugging information which is passed to Fork.java.fork(...):

=============================================================
mainClass=javax.jdo.Enhancer
args=-v -pu persistence-h2 -d /home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
javaHome=None
cwd=/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/classes
runJVMOptions=
bootJars ---------------------------------------------
/home/rgomes/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.6.jar
/home/rgomes/.ivy2/cache/com.google.code.gson/gson/jars/gson-2.3.1.jar
/home/rgomes/.ivy2/cache/javax.jdo/jdo-api/jars/jdo-api-3.0.jar
/home/rgomes/.ivy2/cache/javax.transaction/transaction-api/jars/transaction-api-1.1.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-core/jars/datanucleus-core-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-api-jdo/jars/datanucleus-api-jdo-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-jdo-query/jars/datanucleus-jdo-query-4.0.4.jar
/home/rgomes/.ivy2/cache/org.datanucleus/datanucleus-rdbms/jars/datanucleus-rdbms-4.0.4.jar
/home/rgomes/.ivy2/cache/com.h2database/h2/jars/h2-1.4.185.jar
/home/rgomes/.ivy2/cache/org.postgresql/postgresql/jars/postgresql-9.4-1200-jdbc41.jar
/home/rgomes/.ivy2/cache/com.github.dblock.waffle/waffle-jna/jars/waffle-jna-1.7.jar
/home/rgomes/.ivy2/cache/net.java.dev.jna/jna/jars/jna-4.1.0.jar
/home/rgomes/.ivy2/cache/net.java.dev.jna/jna-platform/jars/jna-platform-4.1.0.jar
/home/rgomes/.ivy2/cache/org.slf4j/slf4j-simple/jars/slf4j-simple-1.7.7.jar
/home/rgomes/.ivy2/cache/org.slf4j/slf4j-api/jars/slf4j-api-1.7.7.jar
/home/rgomes/workspace/poc-scala-datanucleus/model/src/main/resources
/home/rgomes/workspace/poc-scala-datanucleus/model/target/scala-2.11/klasses
envVars ----------------------------------------------

=============================================================

The project is available in github for your convenience at https://github.com/frgomes/poc-scala-datanucleus

Just download it and type

./sbt compile

Any help is immensely appreciated. Thanks
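A sketch of one plausible fix (an assumption from the symptoms, not a verified diagnosis): the debugging dump shows the whole classpath going in as bootJars, which sbt puts on the JVM's boot classpath; resource lookups like the one org.datanucleus.util.Localiser performs can fail from there. Passing the classpath as an ordinary -cp argument, exactly like the hand-run java command that works, keeps it on the application classloader:

import java.io.File
import sbt._

def runEnhancer(classpath: Seq[File], outDir: File): Int = {
  val cp = classpath.map(_.getAbsolutePath).mkString(File.pathSeparator)
  val args = Seq(
    "-cp", cp,                 // application classpath, not bootJars
    "javax.jdo.Enhancer",
    "-v", "-pu", "persistence-h2",
    "-d", outDir.getAbsolutePath)
  Fork.java(ForkOptions(), args) // returns the process exit code
}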

by Richard Gomes at May 22, 2015 05:40 PM

go block vs thread in core.async

From http://martintrojer.github.io/clojure/2013/07/07/coreasync-and-blocking-io/ :

To get a bit more concrete let's see what happens when we try to issue some HTTP GET request using core.async. Let's start with the naive solution, using blocking IO via clj-http.

(defn blocking-get [url]
  (clj-http.client/get url))


(time
   (def data
     (let [c (chan)
           res (atom [])]
       ;; fetch em all
       (doseq [i (range 10 100)]
         (go (>! c (blocking-get (format "http://fssnip.net/%d" i)))))
       ;; gather results
       (doseq [_ (range 10 100)]
         (swap! res conj (<!! c)))
       @res
       )))

Here we're trying to fetch 90 code snippets (in parallel) using go blocks (and blocking IO). This took a long time, and that's because the go block threads are "hogged" by the long running IO operations. The situation can be improved by switching the go blocks to normal threads.

(time
   (def data-thread
     (let [c (chan)
           res (atom [])]
       ;; fetch em all
       (doseq [i (range 10 100)]
         (thread (>!! c (blocking-get (format "http://fssnip.net/%d" i)))))
       ;; gather results
       (doseq [_ (range 10 100)]
         (swap! res conj (<!! c)))
       @res
       )))

What does it mean that "go block threads are hogged by the long running IO operations"?

by qed at May 22, 2015 05:35 PM

Clojure: Scala/Java interop issues for Spark Graphx

I am trying to use Spark/GraphX using Clojure & Flambo.

Here is the code I ended up with:

In the project.clj file:

(defproject spark-tests "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [yieldbot/flambo "0.5.0"]]
  :main ^:skip-aot spark-tests.core
  :target-path "target/%s"
  :checksum :warn
  :profiles {:dev {:aot [flambo.function]}
             :uberjar {:aot :all}
             :provided {:dependencies
                        [[org.apache.spark/spark-core_2.10 "1.3.0"]
                         [org.apache.spark/spark-core_2.10 "1.2.0"]
                         [org.apache.spark/spark-graphx_2.10 "1.2.0"]]}})

And then my Clojure core.clj file:

(ns spark-tests.core  
  (:require [flambo.conf :as conf]
            [flambo.api :as f]
            [flambo.tuple :as ft])
  (:import (org.apache.spark.graphx Edge)
           (org.apache.spark.graphx.impl GraphImpl)))

(defonce c (-> (conf/spark-conf)
               (conf/master "local")
               (conf/app-name "flame_princess")))

(defonce sc (f/spark-context c))

(def users (f/parallelize sc [(ft/tuple 3 ["rxin" "student"])
                              (ft/tuple 7 ["jgonzal" "postdoc"])
                              (ft/tuple 5 ["franklin" "prof"])]))

(defn edge
  [source dest attr]
  (new Edge (long source) (long dest) attr))

(def relationships (f/parallelize sc [(edge 3 7 "collab")
                                      (edge 5 3 "advisor")]))

(def g (new GraphImpl users relationships))

When I run that code, I am getting the following error:

1. Caused by java.lang.ClassCastException
   Cannot cast org.apache.spark.api.java.JavaRDD to
   scala.reflect.ClassTag

  Class.java: 3258  java.lang.Class/cast
  Reflector.java:  427  clojure.lang.Reflector/boxArg
  Reflector.java:  460  clojure.lang.Reflector/boxArgs

Disclaimer: I have no Scala knowledge.

Then I thought that it may be because Flambo returns a JavaRDD when we use f/parallelize. Then I tried to convert the JavaRDD into a simple RDD as used in the GraphX example:

(def g (new GraphImpl (.rdd users) (.rdd relationships)))

But then I am getting the same error, but for the ParallelCollectionRDD class...

From there, I have no idea what may be causing this. The Java API for the Graph class is here, the Scala API for the same class is here.

What I am not clear about is how to effectively use that class signature in Clojure:

org.apache.spark.graphx.Graph<VD,ED>

(Graph is an abstract class, but I tried using GraphImpl in this example)

What I am trying to do is to re-create that Scala example using Clojure.

Any hints would be highly appreciated!

by Neoasimov at May 22, 2015 05:30 PM

Is it possible to create a function in Haskell which returns a list of the constructors for a data type?

Is it possible to create a function in Haskell which returns a list of the constructors for a data type?

It should work like this:

ghci> getConstructors Bool
[True, False]
ghci> getConstructors Maybe
[Nothing, Just]

by Netsu at May 22, 2015 05:26 PM

CompsciOverflow

When there's no memory, should malloc or read/write fail?

To my surprise, I recently found out that Windows will fail a large memory allocation even if little of said memory is actually going to be used; e.g., even if you don't want the swap, you'd better not disable it. http://brandonlive.com/2010/02/21/measuring-memory-usage-in-windows-7/ (Basically, in Windows 7, Windows Task Manager, Performance, System, Commit depicts the sum of the physical memory plus the swap file, and a mere allocation of 2GB will immediately grow the figure by 2GB; if such growth is restricted (e.g. if the page file is disabled), the whole allocation fails.)

I recall that it's also the case with OpenVZ virtualisation, which behaves the same way, thus being incompatible with Java, for example. However, I never really heard of anything like that in regards to the virtual memory of other operating systems, like FreeBSD, OpenBSD or non-OpenVZ Linux (which doesn't necessarily mean that they don't behave the same).

What's the history behind such behaviour, and how do popular systems generally behave in this situation? I mean, isn't virtual memory supposed to be unlimited on any system?

by cnst at May 22, 2015 05:24 PM

StackOverflow

Play Specs2: FakeRequest to WebSocket server

I have a controller with the following method

def subscribe(id: String) = WebSocket.tryAcceptWithActor[JsValue, JsValue]

I've written the following route in conf/routes:

GET     /subscribe/:id     @controllers.WsManager.subscribe(id: String)

The application works, but I want to do a specific test through Specs2.

I tried to "subscribe" to websocket endpoint with:

val request = FakeRequest(GET, "/subscribe/1234")
val response = route(request)

or

val request = FakeRequest(GET, "ws://localhost:3333/subscribe/1234")
val response = route(request)

But in both cases it doesn't work: I receive None as the response (response.isDefined is false). Is there a correct way to connect to a WebSocket endpoint in a Specs2 test?

by Mattia Micomonaco at May 22, 2015 05:19 PM

Writing a unit test for Play websockets

I am working on a Scala + Play application utilizing websockets. I have a simple web socket defined as such:

def indexWS = WebSocket.using[String] { request =>

  val out = Enumerator("Hello!")
  val in = Iteratee.foreach[String](println).map { _ =>
    println("Disconnected")
  }

  (in, out)
}

I have verified this works using Chrome's console. The issue I'm having is trying to write a unit test for this. Currently I have this:

"send awk for websocket connection" in {
  running(FakeApplication()){
    val js = route(FakeRequest(GET,"/WS")).get

    status(js) must equalTo (OK)
    contentType(js) must beSome.which(_ == "text/javascript")
  }
}

However, when running my tests in the Play console, I receive this error, where line 35 corresponds to the line 'val js = route(FakeRequest(GET,"/WS")).get':

NoSuchElementException: None.get (ApplicationSpec.scala:35)

I have not been able to find a good example of unit testing Scala/Play websockets and am confused about how to properly write this test.

by vikash dat at May 22, 2015 05:18 PM

CompsciOverflow

What usage is the delta defined in the polynomial hierarchy?

At the Wikipedia page, the polynomial hierarchy also defines the following:

$\Delta_0^\text{P} = P$, $\Delta_i^\text{P} = \text{P}^{\Sigma_{i-1}^\text{P}}$

However, the only usage of this anywhere else on the page is in the following set of inclusions:

$\Pi_i^P \subseteq \Delta_{i+1}^P \subseteq \Pi_{i+1}^P$.

What usage does this $\Delta_i^P$ class have?
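One standard answer, offered as a remark rather than something from the page: $\Delta_2^\text{P} = \text{P}^\text{NP}$ is the class of problems solvable in polynomial time with adaptive queries to an NP oracle, and it has natural complete problems. For example, deciding whether the lexicographically maximum satisfying assignment of a Boolean formula sets its last variable to $1$ is $\Delta_2^\text{P}$-complete (Krentel, 1988). So the $\Delta$ levels pin down problems believed to sit strictly between $\Sigma_i^\text{P} \cup \Pi_i^\text{P}$ and $\Sigma_{i+1}^\text{P} \cap \Pi_{i+1}^\text{P}$.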

by Ryan at May 22, 2015 04:59 PM

UnixOverflow

Courier IMAP - Account's mailbox directory is not owned by the correct uid or gid

So I poked around and found out that DEFDOMAIN="@domain.se" was messing things up, so I removed it from /etc/courier/imapd. That got me to the point where SMTP works, but I get this from IMAP:

Jul  2 13:23:10 HOST authdaemond: Authenticated: sysusername=anton, sysuserid=<null>, sysgroupid=20001, homedir=/storage/vmail/anton, address=anton, fullname=Anton, maildir=<null>, quota=<null>, options=<null>
Jul  2 13:23:10 HOST authdaemond: Authenticated: clearpasswd=MyPasswd, passwd=$3e$04$AC1c10x0A3etWCJFrla.Rl2sevNhq24yXYxrq8IN7mEeGI20.
Jul  2 13:23:10 HOST imapd-ssl: anton: Account's mailbox directory is not owned by the correct uid or gid

But I'm not sure why, because:

# ls -l /storage/vmail/
-rw-r--r--  1 vmail  vmail   22 Mar 13 01:06 .Xdefaults
-rw-r--r--  1 vmail  vmail  773 Mar 13 01:06 .cshrc
-rw-r--r--  1 vmail  vmail  398 Mar 13 01:06 .login
-rw-r--r--  1 vmail  vmail  113 Mar 13 01:06 .mailrc
-rw-r--r--  1 vmail  vmail  218 Mar 13 01:06 .profile
drwx------  2 vmail  vmail  512 Jun 30 10:44 .ssh
drwxr-xr-x  3 anton  anton  512 Jun 30 10:44 anton

my /etc/courier/imapd says:

MAILDIRPATH=/storage/vmail

But I've also tried:

MAILDIRPATH=Maildir

And /etc/passwd states:

# cat /etc/passwd | grep anton                                                                                                                                                                                 
anton:*:20001:20001:Anton:/storage/vmail/anton:/sbin/nologin

Where am I going wrong?

by Torxed at May 22, 2015 04:57 PM

StackOverflow

Scala Futures are slow with many cores

For a study project I have written a Scala application that uses a bunch of futures to do a parallel computation. I noticed that on my local machine (4 cores) the code runs faster than on the many-core server of our computer science institute (64 cores). Now I want to know why this is.

Task in Detail

The task was to create random boolean k-CNF formulas with n different variables randomly distributed over m clauses, and then see at which m/n combination the probability that a formula is solvable drops below 50% for different random distributions. For this I have implemented a probabilistic k-SAT algorithm, a clause generator and some other code. The core is a function that takes n and m as well as the generator function, runs 100 futures and waits for the result. The function looks like this:

Code in question

def avgNonvalidClauses(n: Int, m: Int)(implicit clauseGenerator: ClauseGenerator) = {

    val startTime = System.nanoTime

    /** how man iteration to build the average **/
    val TRIES = 100

    // do TRIES iterations in parallel 
    val tasks = for (i <- 0 until TRIES) yield future[Option[Config]] {
        val clause = clauseGenerator(m, n)
        val solution = CNFSolver.probKSat(clause)
        solution
    }

    /* wait for all threads to finish and collect the results. we will only wait
     * at most TRIES * 100ms (note: flatten filters out all
     * None's) */
    val results = awaitAll(100 * TRIES, tasks: _*).asInstanceOf[List[Option[Option[Config]]]].flatten

    val millis = Duration(System.nanoTime - startTime, NANOSECONDS).toMillis
    val avg = (results count (_.isDefined)) /  results.length.toFloat

    println(s"n=$n, m=$m => $avg ($millis ms)")

    avg
  }

Problem

On my local machine I get these results

[info] Running Main 
n=20, m=120 => 0.0 (8885 ms)
n=21, m=121 => 0.0 (9115 ms)
n=22, m=122 => 0.0 (8724 ms)
n=23, m=123 => 0.0 (8433 ms)
n=24, m=124 => 0.0 (8544 ms)
n=25, m=125 => 0.0 (8858 ms)
[success] Total time: 53 s, completed Jan 9, 2013 8:21:30 PM

On the 64-core server I get:

[info] Running Main 
n=20, m=120 => 0.0 (43200 ms)
n=21, m=121 => 0.0 (38826 ms)
n=22, m=122 => 0.0 (38728 ms)
n=23, m=123 => 0.0 (32737 ms)
n=24, m=124 => 0.0 (41196 ms)
n=25, m=125 => 0.0 (42323 ms)
[success] Total time: 245 s, completed 09.01.2013 20:28:22

However, I see full load on both machines (the server averages a load of around 60 to 65), so enough threads are running. Why is this? Am I doing something completely wrong?

My local machine has an "AMD Phenom(tm) II X4 955 Processor" CPU; the server uses an "AMD Opteron(TM) Processor 6272". The local CPU has 6800 bogomips, the server's 4200. So, while the local CPU is a third faster, there are 12 times more cores on the server.

Additional

I have pushed a trimmed-down example of my code to GitHub so you can try it for yourself if you are interested: https://github.com/Blattlaus/algodemo (it's an sbt project using Scala 2.10).

Updates

  1. I've eliminated any randomness by seeding the random number generators with 42. This changes nothing.
  2. I've changed the test set. Now the results are even more astonishing (the server is 5 times slower!). Note: all outputs for the average percentage of non-solvable clauses are zero because of the input. This is normal and expected.
  3. Added info about the CPUs.
  4. I've noticed that calls to Random.nextInt() are a factor of 10 slower on the server. I have wrapped all calls in a helper that measures the runtime and prints it to the console if it is slower than 10ms. On my local machine I get a few, typically around 10-20ms. On the server I get many more, and they tend to be above 100ms. Could this be the issue??? (See the sketch below.)
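A sketch of what update 4 suggests (an assumption, modernized to scala.concurrent rather than the deprecated scala.actors futures in the question): scala.util.Random delegates to a single java.util.Random, whose seed is updated with a CAS; with 64 cores hammering one instance, those CAS retries effectively serialize the generators. Giving each task its own generator, e.g. via ThreadLocalRandom, removes the shared state:

import java.util.concurrent.ThreadLocalRandom
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Stands in for clauseGenerator(m, n): draws random values without
// touching any shared Random instance.
def randomClause(m: Int, n: Int): Seq[Int] = {
  val rnd = ThreadLocalRandom.current() // per-thread generator
  Seq.fill(m)(rnd.nextInt(n))
}

val tasks = for (i <- 0 until 100) yield Future {
  randomClause(125, 25)
}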

by Martin Thurau at May 22, 2015 04:54 PM

Cannot resolve Writes[T] at compile time in Play Json

I am trying to make a generic writer to get the String representation of a JSON value with Play Json. What I've got so far is:

import com.twitter.finatra.http.Controller
import play.api.libs.json.{Json, Writes}

trait MyController extends Controller {
  def handle(request: AnyRef) =
    response
     .ok
     .json(createJsonResponse(manage(request)))
     .toFuture

   def manage[T : Writes](request: AnyRef): T

  // There should be an implicit Writes[T] in scope
   def createJsonResponse[T : Writes](data: T) = Json.stringify(Json.toJson[T](data))
}

I have case class TotalsForResponse(issuer: String, total: Int) defined and

  object JsonFormatters {
   implicit val totalsForResponseWrites = Json.format[TotalsForResponse]
  }

This should provide me at compile time with an implicit Writes[T] in scope. In one of my controllers I have

def manage[T : Writes](request: AnyRef) = request match {
  case TotalInvestorsForRequest(issuer) =>
    TotalsForResponse(issuer,
      TitleSectionDBHandler.totalInvestorsFor(issuer))
      .asInstanceOf[T]
}

which results in "diverging implicit expansion for type play.api.libs.json.Writes[Nothing]" at compile time. This was taken from this example, which I couldn't get to work. Any ideas?
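A sketch of what is going on and one way out (assuming the types from the question): inside handle, the call manage(request) gives the compiler nothing to fix T with, so it infers Nothing and then searches for a Writes[Nothing], which diverges. Serializing where the concrete type is known sidesteps the inference hole, e.g. by having manage return the JSON string directly:

import play.api.libs.json.{Json, Writes}
import JsonFormatters._ // brings Writes[TotalsForResponse] into scope

def createJsonResponse[T: Writes](data: T): String =
  Json.stringify(Json.toJson(data))

// T is concrete here, so the implicit resolves normally.
def manage(request: AnyRef): String = request match {
  case TotalInvestorsForRequest(issuer) =>
    createJsonResponse(
      TotalsForResponse(issuer, TitleSectionDBHandler.totalInvestorsFor(issuer)))
}

Alternatives include making the requests a sealed trait with the response type tied to each case, but some explicit link between the request and its response type is needed either way.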

by Tomas Duhourq at May 22, 2015 04:53 PM

QuantOverflow

Differences between editions of Security Analysis by Graham and Dodd?

Where can I find a comparison of the contents, a list of everything that changed or the differences among the different editions of the book Security Analysis by Benjamin Graham & David Dodd?

There are six editions of the book: 1934, 1940, 1951, 1962, 1988, and 2008.

Do I have to read all of them and compare myself or did someone already do that?

Edit: I already read The Intelligent Investor and the sixth edition of Security Analysis. This question is about the differences between the editions. The sixth edition, if I understand it correctly, is based on the 1940 edition.

by tomsv at May 22, 2015 04:48 PM

StackOverflow

split string by char

Scala has a standard way of splitting a string: StringOps.split.

Its behaviour somewhat surprised me, though.

To demonstrate, using the quick convenience function

def sp(str: String) = str.split('.').toList

the following expressions all evaluate to true

(sp("") == List("")) //expected
(sp(".") == List()) //I would have expected List("", "")
(sp("a.b") == List("a", "b")) //expected
(sp(".b") == List("", "b")) //expected
(sp("a.") == List("a")) //I would have expected List("a", "")
(sp("..") == List()) // I would have expected List("", "", "")
(sp(".a.") == List("", "a")) // I would have expected List("", "a", "")

so I expected that split would return an array with (the number of separator occurrences) + 1 elements, but that's clearly not the case.

It is almost the above but with all trailing empty strings removed, except that that's not true for splitting the empty string.

I'm failing to identify the pattern here. What rules does StringOps.split follow?

For bonus points, is there a good way (without too much copying/string appending) to get the split I'm expecting?
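A sketch of both the rule and the workaround: StringOps.split(Char) bottoms out in java.lang.String.split(regex, limit) with limit = 0, which drops trailing empty strings, while the empty input is special-cased to a single empty element. Passing a negative limit keeps the trailing empties:

def sp2(str: String): List[String] = str.split("\\.", -1).toList

sp2("")    // List("")
sp2(".")   // List("", "")
sp2("a.")  // List("a", "")
sp2("..")  // List("", "", "")
sp2(".a.") // List("", "a", "")

With a negative limit every case matches the (occurrences + 1) rule, including the empty string (0 occurrences, 1 element), and nothing is copied beyond what split already does.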

by Martijn at May 22, 2015 04:41 PM

Trying to launch a process via screen from within ansible

I'm having a slightly weird, repeatable, but unexplainable problem with screen.

I'm using ansible/vagrant to build a consistent dev environment for my company, and as a slightly showy finishing touch it starts the dev server running in a screen session so the frontend devs don't need to bother logging in and manually starting the process, but backend devs can log in and take control.

However, one of the systems - despite being built from scratch - ends up with an immediately dead screen (it doesn't log anything to screenlog). Running the command manually works fine.

(the command being)

screen -L -d -m bash -c /home/vagrant/run_screen_server.sh

I've even gone to the point of nuking everything vagrant/virtualbox related on the system, making sure it's installing a clean, nightly box. Exactly the same source box works on all the other machines.

Are there any other debugging steps I can take, or is there something I'm missing?

by halfapenguin at May 22, 2015 04:35 PM

StackOverflow

Spark: Using named arguments to submit application

Is it possible to write a Spark script that has arguments that can be referred to by name rather than by index in the args() array? I have a script that has 4 required arguments and, depending on their values, may require up to 3 additional arguments. For example, in one case args(5) might be a date I need to enter. In another, that date may end up in args(6) because of another argument I need.

Scalding has this implemented, but I don't see that Spark does.
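As far as I know, spark-submit just hands your main class the raw args array and does no application-argument parsing of its own, so a small sketch of a --key value parser gives the named-argument behaviour:

def parseArgs(args: Array[String]): Map[String, String] =
  args.sliding(2, 2).collect {
    case Array(key, value) if key.startsWith("--") =>
      key.stripPrefix("--") -> value
  }.toMap

// spark-submit ... myjob.jar --input /data/in --date 2015-05-22
val opts = parseArgs(Array("--input", "/data/in", "--date", "2015-05-22"))
val date = opts.getOrElse("date", sys.error("missing --date"))

This way the optional date can be looked up by name no matter which position it lands in.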

by J Calbreath at May 22, 2015 04:23 PM

Scalaxb reads from XML, invokes "label", receives UnsupportedOperationException - why?

I'm using scalaxb to convert an XML instance into another object as follows:

val x = xml.XML.load(inputStream)
println(x)

val ed = scalaxb.fromXML[entityDescriptor.scalaxb.EntityDescriptorType](x)
println(ed)

When it invokes fromXML, I receive the following exception:

scalaxb.ParserFailure: Error while parsing 
    urn:oasis:names:tc:SAML:2.0:protocol 
    urn:oasis:names:tc:SAML:1.1:protocol 
    urn:oasis:names:tc:SAML:1.0:protocol: 
    java.lang.UnsupportedOperationException: 
    class Group does not support method 'label'

The XML is well-formed and valid, according to Java's built-in XML Schema validator (I'm converting from a Java to a Scala project).

From my own investigation, it appears that somewhere in scalaxb it has created an instance of scala.xml.Group and invoked the label method, which for Group has no implementation.

  1. Is this a bug, or am I doing something wrong?

  2. If it is a bug, is there a workaround?

  3. If it is not a bug, what am I doing wrong?

by drew at May 22, 2015 04:01 PM

Fefe

Do you still remember the times when the SPD was considered ...

Do you still remember the times when the SPD was considered close to the unions? The party of the little man? The one that protects and strengthens your labor rights? Those times are over. CDU and SPD have enacted the Gewerkschafts-Konkurrenzminimierungsgesetz (union competition minimization law). With a recorded vote, i.e. we can now see who voted how. (Screenshot). There were more votes against it from the fucking CDU than from the SPD!!

The single no vote within the SPD came from the deputy state chairwoman of the Deutsche Polizeigewerkschaft in Niedersachsen.

All together now: Who betrayed us? Who sold us out?

I hope the Federal Constitutional Court gets rid of it again quickly.

May 22, 2015 04:01 PM

High Scalability

Stuff The Internet Says On Scalability For May 22nd, 2015

Hey, it's HighScalability time:


Where is the World Brain? San Fernando marshes in Spain (by Cristobal Serrano)
  • 569TB: 500px total data transfer per month; 82% faster: elite athletes' brains; billions and millions: Facebook's graph store read and write load; 1.3 billion: daily Pinterest spam fighting events; 1 trillion: increase in processing power performance over six decades; 5 trillion: Facebook pub-sub messages per day
  • Quotable Quotes:
    • Silicon Valley: “Tell me the truth,” Gavin demands of a staff member. “Is it Windows Vista bad? Zune bad?” “I’m sorry,” the staffer tells Gavin, “but it’s Apple Maps bad!”
    • @garybernhardt: Reminder to people whose "big data" is under a terabyte: servers with 1 TB RAM can be had about $20k. Your data set fits in RAM.
    • @epc: μServices and AWS Lambda are this year’s containers and Docker at #Gluecon
    • orasis: So by this theory the value of a tech startup is the developer's laptops and the value of a yoga studio is the loaner mats.
    • @ajclayton: An average attacker sits on your network for 229 days, collecting information. @StephenCoty #gluecon
    • @mipsytipsy: people don't *cause* problems, they trigger latent conditions that make failures more likely.  @allspaw on post mortems #srecon15europe
    • @pas256: The future of cloud infrastructure is a secure, elastically scalable, highly reliable, and continuously deployed microservices architecture
    • Kevin Marks: The Web is the network
    • @cdixon: We asked for flying cars and all we got was the entire planet communicating instantly via $34 pocket supercomputers 
    • @ajclayton: Uh oh, @pas256 just suggested that something could be called a "nanoservice"...microservices are already old. #gluecon
    • @jamesurquhart: A sign that containers are interim step? Pkging procs better than pkging servers, but not as good as pkging functs? 
    • @markburgess_osl: Let's rename "immutable infrastructure" to "prefab/disposable" infrastructure, to decouple it from the false association with functionalprog
    • @Beaker: Key to startup success: solve a problem that has been solved before but was constrained due to platform tech cost or non-automated ops scale
    • @mooreds: 10M req/month == $45 for lambda.  Cheap. -- @pas256 #gluecon
    • @ajclayton: Microservices "exist on all points of the hype cycle simultaneously" @johnsheehan #gluecon
    • @oztalip: "Treat web server as a library not as a container, start it inside your application, not the other way around!" -@starbuxman #GOTOChgo
    • @sharonclin: If a site doesn't load in 3 sec, 57% abandon, 80% never return.  @krbenedict #m6xchange #Telerik
    • QuirksMode: Tools don’t solve problems any more, they have become the problem.
    • @rzazueta: Was considering taking a shot every time I saw "Microservices" on the #gluecon hashtag. But I've already gone through two livers.
    • @MariaSallis: "If you don't invest in infrastructure, don't invest in microservices" @johnsheehan #gluecon
    • Brian Gallagher: If the world devolved into a single cloud provider, there would be no need for Cloud Foundry.
    • @b6n: startup idea: use technology from the 70s.
    • Stephen Hawking: The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge
    • @aneel: "Monolithic apps have unlimited invisible internal dependencies" -@adrianco #gluecon
    • @windley: microservices don’t reduce complexity, they move it around, from dev to ops. #gluecon
    • @paulsbruce: When everyone has to be an expert in everything, that doesn't scale." @dberkholz @451research #gluecon
    • @oamike: I didn’t do SOA right, I didn’t do REST right, I’m sure as hell not going to do micro services right. #gluecon @kinlane
    • Urs Hölzle: My biggest worry is that regulation will threaten the pace of innovation.
    • @mccrory: There has been an explosion in managed OpenStack solutions - Platform9, MetaCloud, BlueBox
    • @viktorklang: Remember that you heard it here first, CPU L1 cache is the new disk.

  • This is more a measure of the fecundity of the ecosystem than an indication of disease. By its very nature, the magic creation machine that is Silicon Valley must create both wonder and bewilderment. Silicon Valley Is a Big Fat Lie: That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily.

  • In a liquidity crisis all those promises mean nothing. RadioShack Sold Your Data to Pay Off Its Debts.

  • YouTube has to work at it too. To Take On HBO And Netflix, YouTube Had To Rewire Itself: All of the things that InnerTube has enabled—faster iteration, improved user testing, mobile user analytics, smarter recommendations, and more robust search—have paid off in a big way. As of early 2015, YouTube was finally becoming a destination: On mobile, 80% of YouTube sessions currently originate from within YouTube itself.

  • If you aren't doing web stuff, do you really need to use HTTP? Do you really know why you prefer REST over RPC? There's no reason for API requests to pass through an HTTP stack.

  • If scaling is specialization and the cloud is the computer then why are we still using TCP/IP between services within a datacenter? Remote Direct Memory Access is fast. FaRM: Fast Remote Memory: FaRM’s per-machine throughput of 6.3 million operations per second is 10x that reported for Tao. FaRM’s average latency at peak throughput was 41µs which is 40–50x lower than reported Tao latencies. 

  • MigratoryData with 10 Million Concurrent Connections on a single commodity server. Lots of details on how the benchmark was run and the various configuration options. CPU usage under 50% (with spikes), memory usage was predictable, network traffic was 0.8 Gbps for 168,000 messages per second, 95th Percentile Latency: 374.90 ms. Next up? C100M.

  • Does anyone have a ProductHunt invite that they would be willing to share with me?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

by Todd Hoff at May 22, 2015 03:56 PM

QuantOverflow

Is the Altman Z-Score broken?

When I try to predict future returns with the Altman Z-score, it fails. That would likely not be the case if it predicted bankruptcy well (cf. the Piotroski F-score). It seems to predict neither future (2-year horizon) returns, nor future bankruptcy.

The formula I used: [image: Altman Z-score formula]
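
For reference, since the image is missing, and assuming the original 1968 manufacturing-firm variant of the score:

$$Z = 1.2\,\frac{WC}{TA} + 1.4\,\frac{RE}{TA} + 3.3\,\frac{EBIT}{TA} + 0.6\,\frac{MVE}{TL} + 1.0\,\frac{Sales}{TA}$$

with $WC$ working capital, $TA$ total assets, $RE$ retained earnings, $MVE$ market value of equity, and $TL$ total liabilities; below roughly 1.81 is the distress zone, above 2.99 the safe zone.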

One would think that all the bankruptcies predicted in the low-Altman and lagged low-Altman buckets would show up and hurt returns?

We rebalance once every two years (at the beginning of the year), and plot the number of stocks that have stopped trading while we were holding them, as well as the percentage of delisted stocks vs total positions. Note, not all delistings are due to bankruptcy -- buyouts delist, but are clearly not what we are looking for.

High Altman Z-score (rebalanced once every two years, 8% delisted, 400% return) [chart: high Altman Z-score, two-year timespan, plotting delisted]

Low Altman Z-score (rebalanced once every two years, 10% delisted, 500% total return) [chart: low Altman Z-score, two-year timespan, plotting delisted]

For comparison, I do the same thing with the Piotroski score (with low being < 2 and high being >6)

High Piotroski F-score (rebalanced once every two years, 2.5% delisted, 750% return) [chart: high Piotroski F-score]

Low Piotroski F-score (rebalanced once every two years, 8% delisted, 150% return) [chart: low Piotroski F-score]

So according to my results, Piotroski would be the bankruptcy predictor of choice. Yet people use the Altman formula as if it works, and the Z-score is still being sold by Altman at NYU. I sincerely doubt they are crazy, but I've had no luck finding the evidence or techniques they are using.

by Henry Crutcher at May 22, 2015 03:52 PM

StackOverflow

Parse JSON array using Scala Argonaut

I'm using Scala & Argonaut, trying to parse the following JSON:

[
    {
        "name": "apple",
        "type": "fruit",
        "size": 3
    },
    {
        "name": "jam",
        "type": "condiment",
        "size": 5
    },
    {
        "name": "beef",
        "type": "meat",
        "size": 1
    }
]

And struggling to work out how to iterate and extract the values into a List[MyType] where MyType will have name, type and size properties.

I will post more specific code soon (I have tried many things), but basically I'm looking to understand how the cursor works, and how to iterate through arrays etc. I have tried using \\ (downArray) to move to the head of the array, then :->- to iterate through the array, then --\ (downField) is not available (at least IntelliJ doesn't think so). So the question is, how do I:

  • navigate to the array
  • iterate through the array (and know when I'm done)
  • extract string, integer etc. values for each field - jdecode[String]? as[String]?
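
For completeness, a sketch of the codec-based route (assuming Argonaut 6.x; jsonString stands for the raw JSON above, and tpe stands in for the reserved word type), which avoids manual cursor navigation entirely:

import argonaut._, Argonaut._

case class Item(name: String, tpe: String, size: Int)

// casecodec3 derives encode/decode from the field names in the JSON
implicit val itemCodec: CodecJson[Item] =
  casecodec3(Item.apply, Item.unapply)("name", "type", "size")

// DecodeJson[List[Item]] handles the surrounding array
val items: Option[List[Item]] = jsonString.decodeOption[List[Item]]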

by Gilbert at May 22, 2015 03:39 PM

CompsciOverflow

About a particular use of hashing

Look at the last problem on page 2 here, http://www.cs.nyu.edu/~khot/CSCI-GA.3350-001-2014/sol3.pdf

  • All one wants to do is to convert a $x \in \{ 0,1\}^n$ into a $y \in \{0,1\}^k$ . Then just a linear transformation would have been enough. Why install this vector "b"?

  • If $a_1,x_1,x_2 \in \{0,1\}^n$ and $b_1,y_1,y_2 \in \{0,1\}$ then why is demanding simultaneous satisfaction of the equations $a_1^Tx_1 + b_1 = y_1, a_1^Tx_2 + b_1 = y_2$ the same as claiming the simultaneous satisfaction of the equations $a_1^T(x_1 \oplus x_2) = y, b_1 = y \oplus y_1$? (where $y = y_1 \oplus y_2$)

by user6818 at May 22, 2015 03:22 PM

UnixOverflow

How to achieve portability with sed -i (in-place editing)?

I'm writing shell scripts for my server, which is a shared hosting running FreeBSD. I also want to be able to test them locally, on my PC running Linux. Hence, I'm trying to write them in a portable way, but with sed I see no way to do that.

Part of my website uses generated static html files, and this sed line inserts correct DOCTYPE after each regeneration:

sed -i '1s/^/<!DOCTYPE html> \n/' ${file_name.html}

It works with GNU sed on Linux, but FreeBSD sed expects the first argument after the -i flag to be the extension for the backup copy. This is what it would look like:

sed -i '' '1s/^/<!DOCTYPE html> \n/' ${file_name.html}

However, GNU sed in turn expects the expression to follow immediately after -i. (It also requires fixes to newline handling, but that's already answered here.)

Of course I can include this change in my server copy of the script, but that would mess up, for example, my use of VCS for versioning. Is there a way to achieve this with sed in a fully portable way?

by Red at May 22, 2015 03:22 PM

CompsciOverflow

Regularity of unary languages with word lengths the sum of two resp. three squares

I think about unary languages $L_k$, where $L_k$ is the set of all words whose length is the sum of $k$ squares. Formally: $$L_k=\{a^n\mid n=\sum_{i=1}^k {n_i}^2,\;\;n_i\in\mathbb{N_0}\;(1\le i\le k)\} $$ It is easy to show that $L_1=\{a^{n^2}\mid n\in\mathbb{N_0}\}$ is not regular (e.g. with the pumping lemma).
Further, we know that each natural number is the sum of four squares, which implies that for $k\ge 4$ all languages $L_k$ are regular, since $L_k=L(a^*)$.

Now, I am interested in the cases $k=2$ and $k=3$:

$L_2=\{a^{{n_1}^2+{n_2}^2}\mid n_1,n_2\in\mathbb{N_0}\}$, $L_3=\{a^{{n_1}^2+{n_2}^2+{n_3}^2}\mid n_1,n_2,n_3\in\mathbb{N_0}\}$.

Unfortunately, I am not able to show whether these languages are regular or not (even with the help of Legendre's three-square theorem or Fermat's theorem on sums of two squares).

I am pretty sure that at least $L_2$ is not regular, but unhappily, thinking is not a proof. Any help?

by Danny at May 22, 2015 02:55 PM

StackOverflow

Custom Router in Playframework 2.4

I'm using Play 2.4. I'd like to replace the default router, with my own class, using the new dynamic dependency injection play feature. What are the steps to do that?

by Danix at May 22, 2015 02:52 PM

CompsciOverflow

Convert DFA to Regular Expression

In this old exam-task I don't understand all the steps to convert the DFA below to a Regular Expression. The $q_2$ state is eliminated first. The provided solution to eliminate $q_2$ is:

If we first eliminate $q_2$ we obtain an intermediate automata with $3$ states $(q_0$,$q_1,q_3)$ such that:

  1. We go from $q_0$ to $q_1$ with RE $a+ba$
  2. We go from $q_0$ to $q_3$ with RE $bb$
  3. We go from $q_1$ to $q_1$ with RE $ba$
  4. We go from $q_1$ to $q_3$ with RE $a+bb$

I don't understand no. 2. $q_3$ can also be reached using the RE $aa$. Why is this left out?

[images: the DFA and the intermediate automata after each elimination step]

by user20232 at May 22, 2015 02:20 PM

StackOverflow

Extending traits that cover either one or all subtypes to handle one, some, or all subtypes

Let's say we have some kinds of widgets, and each widget can have some actions that it can perform. Some actions can be performed by all widgets, and some can be performed by only one kind of widget.

In code, that would look something like this:

trait Widget[T] { def actions: List[Action[T]] }
case class WidgetA(actions: List[Action[WidgetA]]) extends Widget[WidgetA]
case class WidgetB(actions: List[Action[WidgetB]]) extends Widget[WidgetB]
case class WidgetC(actions: List[Action[WidgetC]]) extends Widget[WidgetC]

trait Action[-T <: Widget]

So an Action[Widget] is a subtype of Action[WidgetA], and can be inserted into a WidgetA's list of actions. So this is valid:

case class UniversalAction() extends Action[Widget]
WidgetA(List(UniversalAction()))

Now, we want to extend the system such that Actions can be performed by more than one kind of Widget, but not all the kinds of Widget, i.e. an Action may be performed by WidgetA and WidgetB, but not WidgetC. So what I want is to be able to say something like the following:

case class RandomAction() extends Action[WidgetA] with Action[WidgetB]

and have it be considered as both an Action[WidgetA] and an Action[WidgetB], such that the following 2 expressions are valid:

val xs: List[Action[WidgetA]] = List(RandomAction())
val ys: List[Action[WidgetB]] = List(RandomAction())

But we can't extend Action twice, whether directly or indirectly, so what would be the right way of doing this?

Note: While it may be possible to insert traits into the hierarchy such that it covers some subset of the widgets, that quickly becomes unwieldy when new widget types come into play, hence I would like to avoid doing that.

P.S. The question title isn't the best at the moment, if anyone can suggest a more accurate and descriptive title, I'd be happy to change it.

by tempestfire2002 at May 22, 2015 02:17 PM

How to create annotations and get them in scala

I want to define some annotations and use them in Scala.

I looked into the source of Scala and found, in the scala.annotation package, some annotations like tailrec, switch, elidable, and so on. So I defined some annotations the way they do:

class A extends StaticAnnotation

@A
class X {
    @A
    def aa() {}
}

Then I write a test:

object Main {
    def main(args: Array[String]) {
        val x = new X
        println(x.getClass.getAnnotations.length)
        x.getClass.getAnnotations map { println }
    }
}

It prints some strange messages:

1
@scala.reflect.ScalaSignature(bytes=u1" !1* 1!AbCaE
9"a!Q!! 1gn!!.<b    iBPE*,7
    Ii#)1oY1mC&1'G.Y(cUGCa#=S:LGO/AA!A  1mI!)

Seems I can't get the annotation aaa.A.

How can I create annotations in Scala correctly? And how to use and get them?
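
A hedged guess at what is going on: a StaticAnnotation is recorded in the ScalaSignature rather than as a Java runtime annotation, so getClass.getAnnotations cannot see it. Scala reflection can (sketch, assuming Scala 2.11):

import scala.reflect.runtime.universe._

// Class-level and method-level annotations via Scala reflection:
println(typeOf[X].typeSymbol.annotations)             // List(A)
println(typeOf[X].member(TermName("aa")).annotations) // List(A)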

by Freewind at May 22, 2015 01:58 PM

QuantOverflow

What is the fair price of this option?

Without having to use Black-Scholes, how do I price this option using a basic no-arbitrage argument?

Question

Assume zero interest rate and a stock with current price at \$$1$ that pays no dividend. When the price hits level \$$H$ ($H>1$) for the first time you can exercise the option and receive \$$1$. What is the fair price $C$ of the option today?

My thoughts so far

According to my book, the answer is $1/H$. I'm stuck on the reasoning.

Clearly I'm not going to pay more than \$$1/H$ for this option. If $C > 1/H$ then I would simply sell an option and buy $C$ shares with 0 initial investment. Then:

  • if the stock reaches $H$ I pay off the option which costs \$$1$ but I have $\$CH > 1$ worth of shares. (wohoo!)
  • if the stock does not reach $H$ I don't owe the option owner anything but I still have $CH>0$ shares. (wohoo!)

What if $C<1/H$? Then $CH<1$ and I could buy 1 option at \$$C$ by borrowing $C$ shares at \$1 each. Then:

  • if the stock reaches $H$ then I receive $1-CH > 0$ once I pay back the $C$ shares at $\$H$ each
  • but if the stock DOES NOT reach $H$, then I do not get to exercise my option and I still owe $C \times S_t $ where $S_t$ is the current price of the stock. This is where I am stuck.
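
For anyone stuck at the same point, a hedged sketch of how the two halves close (the extra ingredient being that with zero rates the positions can be held indefinitely, and the short side costs nothing on paths where $S_t \to 0$):

$$\text{buy } \tfrac{1}{H} \text{ shares for } \tfrac{1}{H}: \quad S \text{ hits } H \Rightarrow \tfrac{1}{H}\cdot H = 1 \text{ covers the payoff}; \quad S \text{ never hits } H \Rightarrow \text{shares worth} \ge 0.$$

So the payoff can be super-replicated for $1/H$, forcing $C \le 1/H$; and in the $C<1/H$ strategy above, the liability $C\,S_t$ vanishes on paths where the stock drifts to $0$, which is exactly the missing case in the last bullet.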

by Antonius Gavin at May 22, 2015 01:55 PM

Lobsters

What I learned working with Hadoop, HBase and HyperLogLog

A handful of items I’ve learned building MapReduce jobs against an HBase database.

by ludflu at May 22, 2015 01:46 PM

QuantOverflow

Portfolio volatility

Problem True or false? The stock of a firm has an expected return of 10%, and a volatility of 10%. The weight of the stock in a portfolio is 5%, and the correlation of the stock’s return with the portfolio is 0.5. In that case, the contribution of the stock to the volatility of the portfolio is 0.25%.

Attempt $Var(Portfolio)=Var(aX+bY)=a^2 VarX + b^2 VarY+2ab\,StDev(X)\,StDev(Y)\,Corr(X,Y)$

so $Var(Portfolio)_{contributedByX}=a^2 VarX + 2a\,StDev(X)\,Corr(X,Y)$

and we have Corr(X,Y)=0.5 ; VarX=10% ; a=5%.

So I get $Var(Portfolio)_{contributedByX} = 0.050025 $ so $Volatility=\sqrt{0.050025}=0.22366...$

Solution: True is the right answer, so 0.22366 must be wrong..
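
For reference, a hedged sketch of the decomposition the problem most likely intends (marginal risk contributions rather than a variance formula):

$$\sigma_p=\sum_i w_i\,\frac{\operatorname{Cov}(r_i,r_p)}{\sigma_p}=\sum_i w_i\,\rho_{i,p}\,\sigma_i,\qquad\text{so here}\quad w\,\rho\,\sigma = 0.05\times 0.5\times 0.10 = 0.0025 = 0.25\%.$$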

by jacob at May 22, 2015 01:45 PM

StackOverflow

How to only send last message to server using NetMQ/ZeroMQ?

I want to send data to a server from a client. Only the last message is important to the server. If the server comes up after a failure, I only want it to get the last message from the client.

While the server is down I want the client to keep processing and sending messages, or at least put them in a queue (with a length of one message).

I'm trying to use NetMQ/ZeroMQ for this. How can it be done?

Thanks!

by Erik Z at May 22, 2015 01:44 PM

How to Debug ClojureScript

I apologize for this seemingly stupid question, but I've been playing with ClojureScript on and off for a few weeks now, and I can't figure out this one simple question:

How do I debug ClojureScript?

So here is the problem:

  1. I write my *.cljs files
  2. I run cljsc/build ...
  3. I load up my webpage.
  4. Something bad happens.
  5. I open up the firefox console.
  6. I get a line in the generated js, which I find incomprehensible, and I have no idea which line of the original cljs file it came from.

My question:

What is the right way to develop ClojureScript applications?

PS I've looked at ClojureScriptOne -- what I don't like about it is that it strings together a bunch of technology all at once; and I'd prefer to understand how to use each individual piece on its own before chaining it all together.

I'm comfortable with ring + moustache + compojure, [mainly because I can use my standard Clojure debugging techniques] but ClojureScript is another beast.

UPDATE: Things have changed quite a bit since this question was first asked. The proper way to debug ClojureScript applications these days is to enable source maps - http://github.com/clojure/clojurescript/wiki/Source-maps

by user1383359 at May 22, 2015 01:43 PM

StackOverflow

How to continue Vagrant/Ansible provision script from error?

After I provision my Vagrant box... I may get errors during provisioning... how do I restart from the error, instead of doing everything from scratch?

vagrant destroy -f && vagrant up

And I may get an error...

PLAY RECAP ******************************************************************** 
to retry, use: --limit @/path/to/playbook.retry

And I want to just resume from where it failed... it seems it can be done per the message (use --limit...), but when I use it in the Vagrant context it doesn't work.

by smorhaim at May 22, 2015 01:36 PM

/r/netsec

/r/freebsd

/r/compsci

StackOverflow

RxScala Observable never runs

With the following build.sbt:

name := "blah"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies ++= Seq("io.reactivex" % "rxscala_2.11" % "0.24.1", "org.scalaj" %% "scalaj-http" % "1.1.4")

and this code:

import rx.lang.scala.Observable
import scala.concurrent.duration._
import scala.language.postfixOps

object Main {

  def main(args: Array[String]): Unit = {
    println("Ready?")
    val o = Observable.interval(200 millis).take(5)
    o.subscribe(n => println(s"n = ${n}"))
  }

}

When I run it, all that's printed is Ready?; I see no n = ... at all.

I run using sbt run; it's built with Scala 2.11.6 and RxScala 0.24.1, as well as sbt 0.13. Any ideas?
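
A hedged guess at the cause, with an illustrative check: Observable.interval emits on a scheduler thread, and main returns immediately, so the JVM can exit before the first tick. Blocking until the stream completes makes the values appear (assuming RxScala 0.24's toBlocking):

import rx.lang.scala.Observable
import scala.concurrent.duration._

val o = Observable.interval(200.millis).take(5)
// toBlocking keeps the main thread alive until the observable completes
o.toBlocking.foreach(n => println(s"n = $n"))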

by atc at May 22, 2015 01:24 PM

StackOverflow

Pivot Spark Dataframe

I am starting to use Spark DataFrames and I need to be able to pivot the data to create multiple columns out of one column with multiple rows. There is built-in functionality for that in Scalding, and I believe in Pandas in Python, but I can't find anything for the new Spark DataFrame.

I assume I can write a custom function of some sort that will do this, but I'm not even sure how to start, especially since I am a novice with Spark. If anyone knows how to do this with built-in functionality, or has suggestions for how to write something in Scala, it would be greatly appreciated.
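
Update, for later readers: newer Spark releases grew exactly this (hedged: DataFrame pivot landed around Spark 1.6, after this question was written). A sketch, with illustrative column names:

import org.apache.spark.sql.functions._

val pivoted = df
  .groupBy("id")        // one output row per id
  .pivot("key")         // distinct values of `key` become columns
  .agg(first("value"))  // cell contents per (id, key)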

by J Calbreath at May 22, 2015 01:21 PM

How to specify ansible pretasks for a role?

How should one go about defining a pretask for role dependencies? I currently have an apache role that has a user variable, so in my own role in <role>/meta/main.yml I do something like:

---
dependencies:
  - { role: apache, user: proxy }

The problem at this point is that I still don't have the user I specify, and when the role tries to start the apache server under a non-existent user, I get an error.

I tried creating a task in <role>/tasks/main.yml like:

---
- user: name=proxy

But the user gets created only after running the apache task in dependencies (which is to be expected). So, is there a way to create a task that would create a user before running roles in dependencies?

by Kęstutis at May 22, 2015 01:20 PM

Scala getting field and type of field of a case class

So I'm trying to get the fields and their types in a case class. At the moment I am doing it like so:

typeOf[CaseClass].members.filter(!_.isMethod).foreach{
   x =>
     x.typeSignature match {
        case _:TypeOfFieldInCaseClass => do something
        case _:AnotherTypeOfFieldInCaseClass => do something
     }
}

The problem is that x.typeSignature is of type reflect.runtime.universe.Type, which cannot be matched against the field types in the case class. Is there some way to do this?
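
One hedged way around it: compare Types with =:= (or <:<) instead of pattern matching on classes, and walk the case accessors (CaseClass is the question's placeholder name):

import scala.reflect.runtime.universe._

typeOf[CaseClass].members.collect {
  case m: MethodSymbol if m.isCaseAccessor => m
}.foreach { field =>
  field.returnType match {
    case t if t =:= typeOf[String] => // handle a String field
    case t if t =:= typeOf[Int]    => // handle an Int field
    case _                         => // everything else
  }
}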

by Justin Juntang at May 22, 2015 01:14 PM

Planet Clojure

How Not to Panic While Writing a Clojure Book

I made it to that magical moment when the Clojure book I had been working on so long was published and I could actually hold it in my hand.

https://pbs.twimg.com/media/CDWsQPCUgAERViK.jpg

It was an immense project and I am very happy that it is finally done. Since then, I met some people that are interested in writing books as well. They asked if I had any insights or tips having gone through the process as a first time author. I have collected them in this post in hopes that they will be helpful to those going through the process themselves.

The very first thing to do is to get an outline for your book.

Start with an Outline

Ideas are soft and squishy. They drift into different shapes like clouds, and can melt away just as quickly. One of the hardest things to do was trying to arrange all those ideas in my head into an initial form that would serve as the structure for the entire book. I would pick up a pen and paper, start feeling overwhelmed, and suddenly remember I had something else very pressing to do. I successfully avoided starting a book for quite a while, until one day I cornered myself. I decided that I would write my book outline on a long plane flight. With salted peanuts as fuel, and nowhere to escape, I finally wrote an outline. It wasn’t perfect, but it was a start, and looking back it was not too far off. Here it is in all of its original roughness.

``` Title: Living Clojure

From beginning steps to thriving in a functional world

(Each Chapter will follow quotes from Alice in Wonderland and very use ideas from some examples)

Book 1 – Beginner steps

Chapter 1 – Are you comfortable? Talk about how OO is comfortable but there is another world out there and new way of thinking functionally.

        White Rabbit goes by

Chapter 2 – Forms & Functions – Falling down the rabbit hole Chapter 3 – Functional Transformations – Growing bigger and smaller – Key to thinking functionally is about transforming data from one shape to another shape.

        Map & Reduce

Chapter 4 – Embracing side effects – Clojure is impure functional language (The rabbit’s glove) – Cover do and io stuff. Also basic stuff about

        STM atoms and agents/ Protocols

Chapter 5 – Libraries, Libraries – – how to use Leiningen

        build system. Where to find clojure libraries, how to use
        Serpents - camel-snake-kebab

Chapter 6 – core.asyc – Tea Party introduction to the core.async library Chapter 7 – Clojure web – Chesire cat – introduction to Ring, Cheshire library, ClojureScript and OM

Book 2 – From here to there – thriving in a functional world

Training plan for thriving in a functional world.

Chapter 8 – Join the community – Surround yourself with other Clojure enthusiats – Twitter clojure – Github account – Clojure mailing list – Prismatic clojure news – Meetup for local community group. If there is not one in your area. start one! – Attend a Clojure conj

Chatpter 9 – Practice and build up Like Couch to 5K 7 week training program to work up to practicing Clojure

```

Now that I had an outline. I just needed to figure out how long it would take me to write the book.

However Long You Think It Will Take – You Are Wrong

Having never written a book before, I had no idea how much work it would be. The closest thing I had to compare it to was writing a blog post. I figured writing a chapter would be roughly equivalent to writing a blog post. If I could go back in time, this is the moment where my future self would pour a glass of ice water on my past self. Writing a book is nothing like that. It is a lot of time and work. If I had to compare it now to writing blog posts, the process would be this.

- You write a blog post.
- You rewrite the blog post.
- You write a second blog post.
- You rewrite that blog post and the first one too.
- You write another blog post.
- You rewrite all three posts .....

So, if you have to commit to deadlines, make sure you remember how hard it will be, and then double the number.

Speaking of deadlines, they suck, but you should have them.

Make Deadlines

Deadlines are not fun. In fact, deadlines might even be a source of potential panic. But for me, they were a necessary evil. There were a few beautiful times when inspiration came knocking at my door and I couldn’t wait to start writing. But most of the time, inspiration was off somewhere else eating biscuits. The only thing that actually moved the book along was me knowing that I needed to get the next chunk done by a certain date.

I found the best thing to do was to set aside a small bit of time on a daily basis to write something.

Routine, Routine, Routine

A daily routine was the most crucial thing for me. Life is really busy with work and family. It is so easy to get overwhelmed with daily life. I decided that mornings would work best for me. So I would stumble in to my computer an hour before work, with a hot cup of tea in hand, and write something. Some days I actually did quite a bit. Other days, I would write one sentence and declare it done. But, I would always do something. Even though those small slices of time didn’t seem like a lot, they added up over the course of a week.

Another curious thing happens when you do something, even a little bit, day after day. You start to get better at it.

Writing is a Different Skill from Coding

I was used to writing code all day. I found that the code writing skills are not the same as writing about code. In fact, I found it really hard to do at the start. But, just like writing code, you get better with practice. And to get better at anything, feedback is really important.

Get and Trust Feedback

After each chapter, I would get feedback from my editor. She was awesome and provided ways for me to improve the experience for the reader. I lost track of how many times I rewrote that first chapter, but each time it would get a bit better and I would improve as well. After the book was about half done it was sent out to others for technical review. They provided feedback not only on the writing style but also the technical content, making sure that it all made sense.

The feedback loop is much slower for writing a book than for writing code, but it is just as vital. The great people providing feedback are your closest partners in this. You need to trust them, especially during the roughest time: the middle of the book.

The Middle Bit is the Hardest

I found the hardest time was about halfway through the book. The initial excitement of the new endeavor had long since worn off. It seemed like such a mountain of a task, with no end in sight. I questioned my decision to continue with it daily. My routine and deadlines kept me moving forward. But my circle of friends and family kept me sane. It was great to have an outlet, not only to vent my frustration with my slow progress, but to get kind encouragement to keep my spirits up.

During these dark days, I also ate cheese.

Celebrate Your Small Victories

At the end of every chapter or deadline I would fix myself a nice plate of cheese and crackers. You have to celebrate the small wins. Cheese is also very tasty.

When the book was finally done, I had a really tasty plate, complete with Stilton, Brie, and a dependable Cheddar. I was incredibly happy to be finished. But I knew that I definitely could not have done it without the help of others.

Thank Everyone that Supported You

Writing a book is a huge undertaking that is utterly impossible to do alone. I could not have done it without the help and support of my editor, reviewers, family, friends, as well as the entire Clojure community. I am so thankful to all of you who helped me in this project. You are great.

So, should you go ahead and write that book?

Do It

Yes, you should write that book and share your knowledge. Don’t panic, remember to breathe, get some cheese and tea, and go for it! It will be awesome.

by Carin Meier at May 22, 2015 01:11 PM

CompsciOverflow

Converting reality to Petri net

I'm autistic and I think a bit differently. I have superior memory and visual thinking, but I have a problem with converting real things to abstractions. I'd like to model real-world processes with Petri nets. But... when I think of cooking a meal or using a coffee vending machine, I see in my head vegetables, kitchen utensils, cookbooks and coffee machines I have seen in my entire life, but no graphs.

I am able to draw, for example, binary counters, because their graphs look like real things. But I can't see a Greek salad as a graph. I need some help. Once I see examples, I should learn it all very quickly, but right now I have nothing to start from and nobody to ask.

by złyVlojk at May 22, 2015 01:07 PM

StackOverflow

Ansible command to check the java version in different servers

I am writing a test case using Ansible. There are 9 servers in total on which I need to check whether the installed Java version is 1.7.0 or not.

If it is less than 1.7.0 then the test case should fail.

Can anyone help me write this test case, as I am very new to Ansible?

Thanks in advance

by suhas at May 22, 2015 01:02 PM

Execute a before/after each in a specified should block out of many in Specs2

I have a spec like the following written:

class MySpec extends Specification {

  "A" should {
    "be true" in {
      true must beEqualTo(true)
    }
  }

  "B" should {
    "be false" in {
      false must beEqualTo(false)
    } 
  }
  ...
}

How do I (and can I) specify a before/after statement to execute only within the "B" block (for instance) for each test?
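
One hedged approach (a sketch; BSetup is an illustrative name): put the setup in a Scope trait whose construction runs before the example body, and instantiate it only in the examples of the "B" block. Teardown can be added by also mixing in org.specs2.mutable.After and implementing def after.

import org.specs2.mutable.Specification
import org.specs2.specification.Scope

class MySpec extends Specification {

  trait BSetup extends Scope {
    // per-example setup for the "B" block runs here, on construction
  }

  "B" should {
    "be false" in new BSetup {
      false must beEqualTo(false)
    }
  }
}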

by Michael Kendra at May 22, 2015 12:56 PM

Clojure zippers path function not complete?

EDIT #2: This entire question and exploration were based on my missing the fundamental notion of zippers; that they represent a perspective in a datastructure from the point of view of a particular node. So a zipper is - at all times - a pair of the current node and what the rest of the tree looks like from the perspective of that node. I was originally trying to generate a whole new structure from the zipper, while the zipper itself was all I needed, all along. I'll leave this all up for posterity, in the hope that somebody else is helped by it (or so it serves as a warning to any successors!).

Original question:

I'm trying to get my head around using zippers to manipulate trees. The specific problem is that I need to generate at runtime routes between two nodes matching arbitrary criteria in an arbitrary tree.

I thought I could use the path function to get a route to a location by calling path on the current location. But the path returned seems to omit the last step(s) required to get there.

For example:

(def test-zip (vector-zip [0 [1] [2 [3 [4] 5 [6 [7] [8]]]]]))
(-> test-zip 
    down right right down 
    right down right right
    node)

gives 5, but

(-> test-zip 
    down right right down 
    right down right right
    path)

gives

[[0 [1] [2 [3 [4] 5 [6 [7] [8]]]]] [2 [3 [4] 5 [6 [7] [8]]]] [3 [4] 5 [6 [7] [8]]]]

which isn't the same location (it's missing the effect of the last three steps, down right right).

It looks like the path function only gets you to the parent location in the tree, ignoring any siblings between you and the actual location.

Am I missing the point of the path function? I'd assumed that given a tree and a path, applying the path to the tree would bring you to the original location of the path, not partly there.

UPDATE: I've used the following function definition to compile a path of nodes from a start location to an end location:

(defn lca-path [start-loc end-loc]
  (let [sczip (z/root start-loc)
        start-path (z/path start-loc)
        start-node (z/node start-loc)
        end-path (z/path end-loc)
        end-node (z/node end-loc)
        lca-path (filter (set start-path) end-path)
        lca-node [(last lca-path)]
        lca-to-start (conj (vec (drop (count lca-path) start-path)) start-node)
        lca-to-end (conj (vec (drop (count lca-path) end-path)) end-node)
        ]

    (concat (reverse lca-to-start) lca-node lca-to-end))
  )

Pretty heavily influenced by the chat with @Mark Fisher, thanks!

by Oliver Mooney at May 22, 2015 12:42 PM

Installing sbteclipse

I have problems trying to use sbteclipse.

What I have done:

  • went to my global sbt folder.
  • created a plugins folder
  • created the file plugins.sbt with addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0")
  • went to my eclipse project and created a build.sbt file
  • it contains:

name := "foo"

version := "1.0"

scalaVersion := "2.9.2"

libraryDependencies += "net.java.dev.jna" % "jna" % "3.4.0"
  • I select the project folder in my cmd and type sbt eclipse

But I always get the following error

[error] Not a valid command: eclipse (similar: help, alias)
[error] Not a valid project ID: eclipse
[error] Expected ':'
[error] Not a valid key: eclipse (similar: deliver, licenses, clean)
[error] eclipse
[error]        ^

PS: I am using Windows. I am also using sbt 0.12.

by Maik Klein at May 22, 2015 12:40 PM

Scala data reading from Amazon S3

I have been struggling to read nested folders stored in one of the buckets on S3, using Scala. I wrote a script with my credentials. In the bucket there are many folders. Let's say one folder's name is "folder1". In this folder there are many subfolders, and so on. I want to get the names of each subfolder (and each one inside them) for folder1.

val yourAWSCredentials = new BasicAWSCredentials(AWS_ACCESS_KEY, AWS_SECRET_KEY)   
val amazonS3Client = new AmazonS3Client(yourAWSCredentials)

print(amazonS3Client.listObjects(bucketName,"folder1").getObjectSummaries())

But this doesn't return the structure I need. Maybe there is an easier way to get the paths?
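
One hedged alternative (a sketch against the AWS SDK for Java v1): pass a delimiter, so S3 groups keys and returns the immediate "subfolders" as common prefixes:

import com.amazonaws.services.s3.model.ListObjectsRequest
import scala.collection.JavaConverters._

val req = new ListObjectsRequest()
  .withBucketName(bucketName)
  .withPrefix("folder1/")
  .withDelimiter("/")

// e.g. folder1/sub1/, folder1/sub2/ -- recurse with each prefix for deeper levels
val subfolders = amazonS3Client.listObjects(req).getCommonPrefixes.asScala
subfolders.foreach(println)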

by xavi at May 22, 2015 12:33 PM

existential types declarations in Scala

What is the difference between the following existential type declarations:

trait A[T <: A[T]]

1. def existentialErr(arg: A[T forSome{type T <: A[T]}]): Unit =()
2. def existentialOk(arg: A[T] forSome{type T <: A[T]}): Unit =()

Point 1 generates the following compile-time error:

type arguments [T forSome { type T <: packagename.A[T] }] do not conform to trait A's type parameter bounds [T <: packagename.A[T]]

Point 2 compiles without issue.

Generally this question is very similar to the following unanswered one: Confusion with existential types in Scala

Reproduced on Scala 2.11.6

UPD: Travis Brown has provided an answer here

by user2382626 at May 22, 2015 12:33 PM

Vagrant with ansible stop when meet console questions

I'm installing the mongo extension for PHP in my Vagrant box with this task:

---
- name: Install MongoDb PHP extension
  sudo: yes
  command: "pecl install mongo"

- name: Copy mongo extension INI to mods-available folder
  template: >
    src=mongodb_extension.ini.j2
    dest={{ php_conf_dir }}/mongodb.ini
    owner=root group=root mode=644
- name: Enabling mongo config in PHP cli conf
  sudo: yes
  file: src={{ php_conf_dir }}/xhprof.ini  dest=/etc/php5/cli/conf.d/20-mongodb.ini state=link

- name: Enabling xhprof config in PHP fpm conf
  sudo: yes
  file: src={{ php_conf_dir }}/xhprof.ini  dest=/etc/php5/fpm/conf.d/20-mongodb.ini state=link
  notify: reload php-fpm

The problem is that it gets stuck at this Install MongoDb PHP extension task.

I tried installing the mongo extension manually and saw that the console asks this question: Build with Cyrus SASL (MongoDB Enterprise Authentication) support? [no] :

I think the problem is this question.

Does anybody know how to answer this question in Ansible, so the provisioning can run?

Thank you very much.

by Phạm Dương at May 22, 2015 12:20 PM

How to use Spark SQL DataFrame with flatMap?

I am using the Spark Scala API. I have a Spark SQL DataFrame (read from an Avro file) with the following schema:

root
|-- ids: array (nullable = true)
|    |-- element: map (containsNull = true)
|    |    |-- key: integer
|    |    |-- value: string (valueContainsNull = true)
|-- match: array (nullable = true)
|    |-- element: integer (containsNull = true)

Essentially 2 columns [ ids: List[Map[Int, String]], match: List[Int] ]. Sample data looks like:

[List(Map(1 -> a), Map(2 -> b), Map(3 -> c), Map(4 -> d)),List(0, 0, 1, 0)]
[List(Map(5 -> c), Map(6 -> a), Map(7 -> e), Map(8 -> d)),List(1, 0, 1, 0)]
...

What I would like to do is flatMap() each row to produce 3 columns [id, property, match]. Using the above 2 rows as the input data we would get:

[1,a,0]
[2,b,0]
[3,c,1]
[4,d,0]
[5,c,1]
[6,a,0]
[7,e,1]
[8,d,0]
...

and then groupBy the String property (ex: a, b, ...) to produce count("property") and sum("match"):

 a    2    0
 b    1    0
 c    2    2
 d    2    0
 e    1    1

I would want to do something like:

val result = myDataFrame.select("ids","match").flatMap( 
    (row: Row) => row.getList[Map[Int,String]](1).toArray() )
result.groupBy("property").agg(Map(
    "property" -> "count",
    "match" -> "sum" ) )

The problem is that the flatMap converts the DataFrame to an RDD. Is there a good way to do a flatMap-type operation followed by groupBy using DataFrames?
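
A hedged sketch of one route (assuming Spark 1.3+): accept the drop to an RDD for the flatMap, then promote the result back with toDF before grouping. matched stands in for the reserved word match, and the getAs calls assume the schema shown above.

import org.apache.spark.sql.functions._
import sqlContext.implicits._

case class Flat(id: Int, property: String, matched: Int)

val flat = myDataFrame.select("ids", "match").rdd.flatMap { row =>
  val ids     = row.getAs[Seq[Map[Int, String]]](0)
  val matches = row.getAs[Seq[Int]](1)
  ids.zip(matches).flatMap { case (idMap, m) =>
    idMap.map { case (id, prop) => Flat(id, prop, m) }
  }
}.toDF()

flat.groupBy("property")
    .agg(count(col("property")), sum(col("matched")))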

by Yuri Brovman at May 22, 2015 12:12 PM

Error dispatch-nio is not found

I'm trying out the example given at http://dispatch-classic.databinder.net/Choose+an+Executor.html for dispatch-nio. Example given:

import dispatch._
val h = new nio.Http
val f = h(url("http://www.scala-lang.org/") as_str)

My code:

  import dispatch._
  val h = new nio.Http
  var host = "http://www.scala-lang.org";
    val f: Future[String] = h(url("http://www.scala-lang.org/") as_str)
    f.apply();

But it doesn't recognize the nio and as_str keywords. Could anyone please suggest what the problem might be?

by Veerendra Kumar at May 22, 2015 12:03 PM

CompsciOverflow

K-nearest neighbor: double instances in training and test set [on hold]

Consider the following case:

A dataset of 1000 cases was partitioned into a training set of 600 cases and a validation set of 400 cases. A k-Nearest Neighbors model with k=1 had a misclassification error rate of 8% on the validation data. It was subsequently found that the partitioning had been done incorrectly and that 100 cases from the training data set had been accidentally duplicated and had overwritten 100 cases in the validation dataset.

Now, what is the misclassification error rate for the 300 cases that were truly part of the validation data? I've been asked this question, but I'm really struggling to find the answer.
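
A hedged sketch of the arithmetic the exercise seems to intend (assuming each duplicated validation case finds its training copy as its own 1-nearest neighbor and is therefore always classified correctly):

$$0.08 \times 400 = 32 \text{ errors}, \qquad \frac{32}{300} \approx 10.7\%.$$

All 32 errors must then sit among the 300 genuine validation cases, giving roughly a 10.7% error rate on them.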

by Leon-- at May 22, 2015 11:59 AM

StackOverflow

Is order guaranteed when multiple processes are waiting to put data in the same channel?

Here is the code:

(ns typedclj.core
  (:require [clojure.core.async
             :as a
             :refer [>! <! >!! <!! go chan buffer close! thread
                     alts! alts!! timeout]])
  (:gen-class))


(def mychan (chan))
(go (while true
      (>! mychan "what is this?")))
(go (loop [i 0]
      (>! mychan (str "msg" i))
      (recur (inc i))))
(go (loop [i 0]
      (>! mychan (str "reply" i))
      (recur (inc i))))
(go (loop [i 0]
      (>! mychan (str "curse" i))
      (recur (inc i))))

Some experimentation in the repl suggests that the channel takes data from each of the 4 process in turn:

(<!! mychan)
=> "what is this?"
(<!! mychan)
=> "msg0"
(<!! mychan)
=> "reply0"
(<!! mychan)
=> "curse0"
(<!! mychan)
=> "what is this?"
(<!! mychan)
=> "msg1"
(<!! mychan)
=> "reply1"
(<!! mychan)
=> "curse1"
(<!! mychan)
=> "what is this?"
(<!! mychan)
=> "msg2"
(<!! mychan)
=> "reply2"
(<!! mychan)
=> "curse2"

I am wondering whether the order is always maintained, i.e. whether the process that started first will also feed the channel first, in each cycle.

by qed at May 22, 2015 11:50 AM

Problems when merging a sampled Observable

I'm using rx-scala, which is a subproject of rx-java. I'll be using Scala syntax and hope that everyone understands.

I'm encountering odd behavior, and I don't know whether it's a bug or misusage of rx operators on my behalf.

Problem Statement

I have an ox: Observable[X] and a trigger observable tr: Observable[()]. I want an observable oy that is a transformation of ox using function f: Function[X,Y], but only when triggered, since f is potentially expensive.

If there is no transformed value for the last value of ox, then oy should be null.

Some remarks:

  • ox is hot, as it is the result of UI events.
  • ox behaves correctly (both values and timing), as I checked with println debugging.
  • oy fires at the correct times; it's just using outdated values of ox when it's a not-null value.

Current Code

oy = ox.sample(tr).map(f).merge(ox.map(x => null))

The problem with the above code is: it works initially, but after some time, when triggering tr, oy applies f to old values of ox. When not changing ox, if I trigger tr repeatedly, the results get newer and eventually catch up.

If I remove the merge to not reset to null, then everything works fine (probably, as the effect appears non-deterministic).

Question

My presented code is buggy.

  1. I'd like to know whether I'm doing something wrong.
  2. I welcome alternative ways of achieving what I need.

For the Java people

  • generics/type annotation: ox: Observable[X] means Observable<X> ox
  • lambdas: x => null means x -> null

by ziggystar at May 22, 2015 11:48 AM

Fred Wilson

Followup Friday: The Results of the Apple Watch Followup Survey

Here are the results of the survey we did on AVC yesterday. These are very good numbers for the Apple Watch.

[image: Apple Watch followup survey results]

by Fred Wilson at May 22, 2015 11:46 AM

QuantOverflow

negative probabilities in the bivariate tree heston model

I am trying to implement the bivariate tree approach for the Heston model by Beliaeva & Nawalkha.

I currently have the problem that, given the specifications in their examples, I always obtain a negative probability for the stock change to the middle node.

Has anyone else experienced this or can give some pointers as to why this might be happening?

by user12157 at May 22, 2015 11:43 AM

UnixOverflow

How do you install the FreeBSD10 kernel sources?

I am trying to run an update on FreeBSD 10 and I am being asked for the kernel sources:

===>>> Launching child to update lsof-4.89.b,8 to lsof-4.89.d,8

===>>> All >> lsof-4.89.b,8 (9/9)

===>>> Currently installed version: lsof-4.89.b,8
===>>> Port directory: /usr/ports/sysutils/lsof

        ===>>> This port is marked IGNORE
        ===>>> requires kernel sources


        ===>>> If you are sure you can build it, remove the
               IGNORE line in the Makefile and try again.

===>>> Update for lsof-4.89.b,8 failed
===>>> Aborting update

but sysinstall no longer exists:

sysinstall: not found

What is the new method of installing the kernel sources in FreeBSD 10?

I thought of bsdinstall, but it only tries to chop up my disk, which I do not want. [screenshot]

by nix at May 22, 2015 11:29 AM

StackOverflow

_each iterates only for first time in scala.html

In my .scala.html file the _each loop iterates the list only for first value

I have written it as follows:

 <% _.each(myList, function(b, index, list){%>
           <li><div><%=b.Name%></div></li>
  <%});%>

I have tried printing the length of myList; it gets printed as 2, but only the first object is printed.

myList is just a simple JSON array.

by 3_User at May 22, 2015 11:26 AM

Planet Emacsen

Irreal: Calc Quick Reference

In a comment to one of my posts on calc, Sue D. Nymme (love that handle) mentioned that he was working on a quick reference for calc and kindly provided a sneak preview. Now he reports that he's ready to release it to the world.

Check out his GitHub repository for the project to get a copy. It's a really excellent resource that has the subject matter arranged in a more logical way than the other quick references I've seen. You can print it (see qref-config.ps) as 1 page per sheet, a half-sheet booklet, or a quarter-sheet booklet depending on your needs. There are directions in the README for assembling the output into a booklet. To generate a copy, just edit qref-config.ps to choose how many pages you want on each sheet and then type make.

If you're a calc user (and you really should be) you'll definitely want to check this cheat sheet out. I use mine all the time.

by jcs at May 22, 2015 11:21 AM

StackOverflow

Should I use `()` or not for the method `getClients` when the return value can be changed?

In Scala, there is a server class which has a method, say getClients, that returns the currently connected clients.

I'm not sure how should I define it:

  1. getClients
  2. clients
  3. getClients()

The return value of this method changes over time, since clients connect and disconnect from time to time.

Which one shall I choose?

by Freewind at May 22, 2015 11:09 AM

How to convert rdd object to dataframe in spark

How can I convert an RDD (org.apache.spark.rdd.RDD[org.apache.spark.sql.Row]) to a DataFrame (org.apache.spark.sql.DataFrame)? I converted a DataFrame to an RDD using .rdd. After processing it I want it back as a DataFrame. How can I do this?
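
A hedged sketch (assuming Spark 1.3+): an RDD[Row] carries no schema of its own, so hand one back explicitly, e.g. the original DataFrame's schema if the rows still match it (processedRdd and originalDf are illustrative names):

// createDataFrame(RDD[Row], StructType) reattaches a schema to the rows
val df2 = sqlContext.createDataFrame(processedRdd, originalDf.schema)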

by user568109 at May 22, 2015 11:01 AM

How to configure HikariCP for Slick 3.0.0 RC1 on Typesafe conf

I have a Play Application based on the play-scala Typesafe template (Play Scala Seed), and tried to add Slick 3.0.0 to the project and connect to a PostgreSQL database.

First I added the dependencies to build.sbt:

libraryDependencies ++= Seq(
    "com.typesafe.slick" %% "slick" % "3.0.0-RC1",
    "postgresql" % "postgresql" % "9.1-901.jdbc4"
)

Then I added the database configuration in application.conf:

brDb = {
  dataSourceClass = org.postgresql.ds.PGSimpleDataSource
  url = "jdbc:postgresql://localhost:5432/test"
  user = "postgres"
  password = "postgres"
  numThreads = 10
}

Note that I haven't explicitly disabled pooling, so it will try to use HikariCP, because as of Slick 3.0.0 RC1, HikariCP support exists and pooling with it is enabled by default.

And in my DAO object, I tried to get the database connection like this:

Database.forConfig("brDb")

When I run the app using activator run, I get this error:

RuntimeException: java.lang.NoClassDefFoundError: com/zaxxer/hikari/HikariConfig

Then I tried adding HikariCP as a dependency in build.sbt:

libraryDependencies ++= Seq(
    // ...
    "com.zaxxer" % "HikariCP" % "2.3.3",
    // ...
)

Cleaned and recompiled the app using activator clean compile, and ran it again, but I get another error:

RuntimeException: java.lang.UnsupportedClassVersionError: com/zaxxer/hikari/HikariConfig

I think I am missing some configuration, but I am not sure and have not found more info about it. How should I set up the configuration to get the connection pool working?
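
A hedged guess for later readers: HikariCP 2.x is compiled for Java 8, and UnsupportedClassVersionError is the classic symptom of loading such classes on an older JVM. Either run activator on Java 8, or on Java 6/7 depend on the backport artifact instead:

// assumption: the com.zaxxer backport artifact for pre-Java-8 JVMs
libraryDependencies += "com.zaxxer" % "HikariCP-java6" % "2.3.3"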

by Guillermo Gutiérrez at May 22, 2015 10:41 AM

Can not connect with Casbah but it works with ReactiveMongo

I have an issue connecting to my mongo database with Casbah, while it works fine with ReactiveMongo. Here is the code used with Casbah: val client = MongoClient(MongoClientURI("my_uri")), and with ReactiveMongo: this.driver(actorSystem).connection(MongoConnection.parseURI("my_uri")). The error I get with Casbah is: { "serverUsed" : "host:27017" , "ok" : 0.0 , "errmsg" : "auth failed" , "code" : 18}. Any idea where this could come from?

by TrexXx at May 22, 2015 10:39 AM

How to load local file in sc.textFile, instead of HDFS

I'm following the great spark tutorial

so I'm trying, at 46m:00s, to load the README.md but it fails. What I'm doing is this:

$ sudo docker run -i -t -h sandbox sequenceiq/spark:1.1.0 /etc/bootstrap.sh -bash
bash-4.1# cd /usr/local/spark-1.1.0-bin-hadoop2.4
bash-4.1# ls README.md
README.md
bash-4.1# ./bin/spark-shell
scala> val f = sc.textFile("README.md")
14/12/04 12:11:14 INFO storage.MemoryStore: ensureFreeSpace(164073) called with curMem=0, maxMem=278302556
14/12/04 12:11:14 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 160.2 KB, free 265.3 MB)
f: org.apache.spark.rdd.RDD[String] = README.md MappedRDD[1] at textFile at <console>:12
scala> val wc = f.flatMap(l => l.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://sandbox:9000/user/root/README.md
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)

How can I load that README.md?
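
A hedged note for anyone else hitting this: with HDFS configured as the default filesystem, a bare path resolves to hdfs://; an explicit file:// URI forces the local file (which must exist at that path on whichever node runs the task):

// file:// bypasses the default hdfs:// resolution
val f = sc.textFile("file:///usr/local/spark-1.1.0-bin-hadoop2.4/README.md")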

by Jas at May 22, 2015 10:21 AM

Context bound for nested type

Is it possible to somehow create a context bound for a nested type? Something like this:

def f[T : U[List]](a: T)

Of course, this is not valid Scala syntax, but it illustrates what I want to achieve, that is, getting a bound on an implicit U[List[T]]. Is this possible?

Thanks.
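
For reference, the desugared form does express it: a context bound T : V is just sugar for an extra implicit parameter of type V[T], so the nested shape can be written out by hand (assuming U is some type constructor U[_]):

// equivalent of the hypothetical "def f[T : U[List]](a: T)"
def f[T](a: T)(implicit ev: U[List[T]]): Unit = ()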

by kaktusito at May 22, 2015 10:19 AM

CompsciOverflow

Is there a more intuitive proof of the halting problem's undecidability than diagonalization?

I understand the proof of the undecidability of the halting problem (given for example in Papadimitriou's textbook), based on diagonalization.

While the proof is convincing (I understand each step of it), it is not intuitive to me in the sense that I don't see how someone would derive it, starting from the problem alone.

In the book, the proof goes like this: "suppose $M_H$ solves the halting problem on an input $M;x$, that is, decides whether Turing machine $M$ halts for input $x$. Construct a Turing machine $D$ that takes a Turing machine $M$ as input, runs $M_H(M;M)$ and reverses the output." It then goes on to show that $D(D)$ cannot produce a satisfactory output.

It is the seemingly arbitrary construction of $D$, particularly the idea of feeding $M$ to itself, and then $D$ to itself, that I would like to have an intuition for. What led people to define those constructs and steps in the first place?

Does anyone have an explanation on how someone would reason their way into the diagonalization argument (or some other proof), if they did not know that type of argument to start with?

Addendum given the first round of answers:

So the first answers point out that proving the undecidability of the halting problem was something based on Cantor and Russell's previous work and development of the diagonalization problem, and that starting "from scratch" would simply mean having to rediscover that argument.

Fair enough. However, even if we accept the diagonalization argument as a well-understood given, I still find there is an "intuition gap" from it to the halting problem. Cantor's proof of the real numbers uncountability I actually find fairly intuitive; Russell's paradox even more so.

What I still don't see is what would motivate someone to define $D(M)$ based on $M$'s "self-application" $M;M$, and then again apply $D$ to itself. That seems to be less related to diagonalization (in the sense that Cantor's argument did not have something like it), although it obviously works well with diagonalization once you define them.

P.S.

@babou summarized what was troubling me better than myself: "The problem with many versions of the proof is that the constructions seem to be pulled from a magic hat."
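
For readers with the same itch, one way to see the construction as plain Cantor diagonalization (a sketch): index the rows and columns of an infinite table by machines, with entries

$$f(M,N) = \begin{cases} 1 & \text{if } M \text{ halts on input } N,\\ 0 & \text{otherwise,}\end{cases}$$

and let $D$ compute the flipped diagonal, $D(M) = \lnot f(M,M)$. If $D$ were a machine, it would be some row of the table, yet it differs from every row at the diagonal entry, and the contradiction surfaces exactly at $D(D) = \lnot f(D,D)$. Feeding $M$ to itself is just "look at the diagonal", and feeding $D$ to itself is "look at the row where $D$ itself would have to live".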

by user118967 at May 22, 2015 10:17 AM

StackOverflow

Combining Futures dependent on each other

I'm using Scala to make HTTP GET requests to an API (Play Framework's WS, to be exact) which responds with a JSON response that looks like;

{
  data: [
    {text: "Hello there", id: 1},
    {text: "Hello there again", id: 2}
  ],
  next_url: 'http://request-this-for-more.com/api?page=2' //optional
}

So, the next_url field in the returned JSON may or may not be present.

What my method needs to do is start by calling the first URL, check if the response has a next_url, and then do a GET on that. In the end, I should have the data fields from all the responses combined into one single Future. I terminate when the response has no next_url present in it.

Now, doing this in a blocking way is easier, but I don't want to do that. What is the best way to tackle a problem like this?
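
One non-blocking shape for this is a recursive helper that flatMaps each response into the fetch of the next page, accumulating the data fields as it goes. A sketch assuming Play 2.3's WS client (the JSON field names are those from the example above):

import play.api.Play.current
import play.api.libs.json.JsValue
import play.api.libs.ws.WS
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def fetchAll(url: String, acc: List[JsValue] = Nil): Future[List[JsValue]] =
  WS.url(url).get().flatMap { resp =>
    val data = (resp.json \ "data").as[List[JsValue]]
    (resp.json \ "next_url").asOpt[String] match {
      case Some(next) => fetchAll(next, acc ++ data)      // keep paging
      case None       => Future.successful(acc ++ data)   // no next_url: done
    }
  }

No call blocks; each page triggers the next fetch inside the Future callback, and the caller just gets one Future of the combined list.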

by Ashesh at May 22, 2015 10:07 AM

/r/netsec

Planet Theory

Choongbum Lee proved the Burr-Erdős conjecture

Let H be a graph. The Ramsey number R(H) is the smallest n such that whenever you color the edges of the complete graph with n vertices with two colors blue and red, you can either find a blue copy or a red copy of H.

Ramsey’s famous theorem asserts that if H is a complete graph on m vertices then R(H) is finite. It follows that R(H) is finite for every graph H, and understanding the dependence of R(H) on H is a very important question. Of course there are very basic extensions: to many colors, to different requirements for different colors, and to hypergraphs.

A graph is d-degenerate if it can be reduced to the empty graph by successively deleting vertices of degree at most d. Thus, trees are 1-degenerate (and 1-degenerate graphs are forests), and planar graphs are 5-degenerate. Being d-degenerate for some fixed d is equivalent to the condition that the number of edges is at most a constant times the number of vertices, uniformly over all subgraphs.

In 1973, Burr and Erdős conjectured that for every natural number d, there exists a constant c = c(d) such that every d-degenerate graph H on n vertices satisfies R(H) ≤ cn. This is a very different behavior from that of complete graphs, where the dependence on the number of vertices is exponential. In 1983 Chvátal, Rödl, Szemerédi, and Trotter proved the conjecture when the maximum degree is bounded. Over the years further restricted cases of the conjecture were proved and weaker estimates were demonstrated. These developments were instrumental in the development of some very basic tools in extremal and probabilistic combinatorics. Lee’s paper Ramsey numbers of degenerate graphs proved the conjecture!


by Gil Kalai at May 22, 2015 09:45 AM

StackOverflow

Ansible - "add_host" skipping newgroup host

Here is my code:

- hosts: Server
  tasks:
    - name: Get Slave Status
      mysql_replication: mode=getslave
      register: slaveStat

    - name: Display MasterIP
      debug: msg="{{ slaveStat.Master_Host }}"

    - name: Register MasterIP
      local_action: add_host hostname={{ slaveStat.Master_Host }} groupname=active

- hosts: active
  tasks:
    - name: Test command
      command: echo `df -h`

And when I run the playbook it skips the new host group "active". Please help me find the cause of the task execution failure.

Here is the output :

#ansible-playbook  -i hosts site.yml -l 10.200.19.21 -u root -v

PLAY [controller] *************************************************************

GATHERING FACTS ***************************************************************
ok: [10.200.19.21]

TASK: [Get Slave Status] ******************************************************
ok: [10.200.19.21] => {"Connect_Retry": 60, "Exec_Master_Log_Pos": 107, "Last_Errno": 0, "Last_Error": "", "Last_IO_Errno": 0, "Last_IO_Error": "", "Last_SQL_Errno": 0, "Last_SQL_Error": "", "Master_Host": "10.200.19.22", "Master_Log_File": "mysql-bin.000017", "Master_Port": 3306, "Master_SSL_Allowed": "No", "Master_SSL_CA_File": "", "Master_SSL_CA_Path": "", "Master_SSL_Cert": "", "Master_SSL_Cipher": "", "Master_SSL_Key": "", "Master_SSL_Verify_Server_Cert": "No", "Master_Server_Id": 4886, "Master_User": "controller_repl", "Read_Master_Log_Pos": 107, "Relay_Log_File": "mysqld-relay-bin.000005", "Relay_Log_Pos": 253, "Relay_Log_Space": 556, "Relay_Master_Log_File": "mysql-bin.000017", "Replicate_Do_DB": "", "Replicate_Do_Table": "", "Replicate_Ignore_DB": "", "Replicate_Ignore_Server_Ids": "", "Replicate_Ignore_Table": "test.hoge", "Replicate_Wild_Do_Table": "", "Replicate_Wild_Ignore_Table": "", "Seconds_Behind_Master": 0, "Skip_Counter": 0, "Slave_IO_Running": "Yes", "Slave_IO_State": "Waiting for master to send event", "Slave_SQL_Running": "Yes", "Until_Condition": "None", "Until_Log_File": "", "Until_Log_Pos": 0, "changed": false}

TASK: [Display MasterIP] ******************************************************
ok: [10.200.19.21] => {
    "msg": "10.200.19.22"
}

TASK: [Register MasterIP] *****************************************************
ok: [10.200.19.21 -> 127.0.0.1] => {"new_groups": ["active"], "new_host": "10.200.19.22"}

PLAY [active] *****************************************************************
skipping: no hosts matched

PLAY RECAP ********************************************************************
10.200.19.21   

by sibaprasad Mahapatra at May 22, 2015 09:42 AM

This scala code wouldn't show my result

I just started writing Scala about a month ago; I have, however, been writing Java, JavaScript and some other languages for a while. I'd need someone to tell me why this code doesn't display the result of the move(x,y) method, even though it builds and runs successfully.


  class PinPoint(val xc: Int, val yc: Int){
     var x:Int = xc; var y:Int = yc

     def move(dx:Int, dy:Int){
       x = x + dx
       y = y + dy

       println("Position on horizontal axis is " + x);
       print("Position on vertical axis is " + y);
       }
   }


   object Run {
       def main(args: Array[String]) {

          val pos = new PinPoint(20,18);
          println()
          pos.move(5,7);
       }
   }
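
The code itself compiles and move should print; one thing worth ruling out (an assumption, not certain from the question alone) is output buffering, since the last call is a print without a trailing newline and some consoles only flush complete lines. Ending with println, or flushing explicitly, makes the final line show up reliably:

print("Position on vertical axis is " + y)
Console.flush()   // scala.Console: forces the partial line out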

by Sigmabooma at May 22, 2015 09:40 AM

Play authentication with scala/reactivemongo

I want to implement play2-auth with Scala for authentication with ReactiveMongo. I have tried a lot but have not been able to get it non-blocking throughout.

This is my USER Controller

import scala.concurrent.Future
import org.mindrot.jbcrypt.BCrypt
import play.api.Logger
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.json.Json
import play.api.libs.json.Json.toJsFieldJsValueWrapper
import play.api.mvc.Action
import play.api.mvc.Controller
import play.modules.reactivemongo.MongoController
import play.modules.reactivemongo.json.collection.JSONCollection
import jp.t2v.lab.play2.auth.LoginLogout
import jp.t2v.lab.play2.auth.AuthConfig

object UserController extends Controller with MongoController {
  def collection: JSONCollection = db.collection[JSONCollection]("users")

  def create = Action.async(parse.json) { implicit request =>
    request.body.validate[User].map { user =>
      val pass = BCrypt.hashpw(user.password, BCrypt.gensalt())
      user.password = pass;
      collection.insert(user).map { lastError =>
        Logger.debug(s"Successfully inserted with LastError: $lastError")
        Created
      }
    }.getOrElse(Future.successful(BadRequest("invalid json")))
  }

  def list = Action.async {
    val cursor = collection.find(Json.obj()).cursor[User]
    val futureList = cursor.collect[List]()
    futureList.map { users => Ok(Json.toJson(users)) }
  }

  def findByEmail(email: String): Future[Option[User]] = {
    val futureItem: Future[Option[User]] = collection.find(Json.obj("email" -> email)).one[User]
   futureItem
  }

  def findByUsername(username: String): Future[Option[User]] = {
    val futureItem: Future[Option[User]] = collection.find(Json.obj("username" -> username)).one[User]
    futureItem
  }


}

and this is BasicAuthConfig

import scala.annotation.implicitNotFound
import scala.concurrent.ExecutionContext
import scala.concurrent.Future
import scala.reflect.ClassTag
import scala.reflect.classTag

import com.loven.web.user.UserController

import jp.t2v.lab.play2.auth.AuthConfig
import play.Logger
import play.api.mvc.RequestHeader
import play.api.mvc.Results.Forbidden

trait BaseAuthConfig extends AuthConfig {

  type Id = String
  type User = com.loven.web.user.User
  type Authority = String

  val idTag: ClassTag[Id] = classTag[Id]
  val sessionTimeoutInSeconds = 3600

  def resolveUser(username: String)(implicit ctx: ExecutionContext) = Future.successful(UserController.findByUsername(username))
  def authorizationFailed(request: RequestHeader)(implicit ctx: ExecutionContext) = throw new AssertionError("don't use")
  override def authorizationFailed(request: RequestHeader, user: User, authority: Option[Authority])(implicit ctx: ExecutionContext) = {
    Logger.info(s"authorizationFailed. username: ${user.username}, authority: $authority")
    Future.successful(Forbidden("no permission"))
  }
  def authorize(user: User, authority: Authority)(implicit ctx: ExecutionContext) = Future.successful((user.role, authority) match {
    case ("admin", _)         => true
    case ("normal", "normal") => true
    case _                    => false
  })

}

I am getting an error on this line:

def resolveUser(username: String)(implicit ctx: ExecutionContext) = Future.successful(UserController.findByUsername(username))
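
The cause is visible in the types: findByUsername already returns a Future[Option[User]], so wrapping it in Future.successful produces a Future[Future[Option[User]]], which does not match what AuthConfig expects. Returning the future directly fixes it:

def resolveUser(username: String)(implicit ctx: ExecutionContext): Future[Option[User]] =
  UserController.findByUsername(username)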

by Rahul Shukla at May 22, 2015 09:23 AM

javascript - merging functions

How can I integrate these functions into one single function, so that I need not run them in each screen or script file of my project?

I have a function call in the respective script files. But now I have to integrate the function bodies into one single function and put it inside a COMMON script file that works for all screens of my project...

appzillon.app.currencyConversion = function() {
    var curData = appzillon.data.scrdata.Deposits;
    $.each(curData, function(i, obj) {
        var Amt = Number(obj.Amount);
        obj.Amount = (Number(Amt) * 1.490);
    });
    appzillon.data.loadData(null);
};

appzillon.app.currencyConversion = function() {
    var curData = appzillon.data.scrdata.Investments;
    $.each(curData, function(i, obj) {
        var Amt = Number(obj.Amount);
        obj.Amount = (Number(Amt) * 1.490);
    });
    appzillon.data.loadData(null);
};

appzillon.app.currencyConversion = function() {
    var curData = appzillon.data.scrdata.AccountDetails;
    $.each(curData, function(i, obj) {
        var Amt = Number(obj.Balance);
        obj.Balance = (Number(Amt) * 1.490);
    });
    appzillon.data.loadData(null);
};

appzillon.app.currencyConversion = function() {
    var curData = appzillon.data.scrdata.Accounts;
    $.each(curData, function(i, obj) {
        var Amt = Number(obj.Balance);
        obj.Balance = (Number(Amt) * 1.490);
    });
    appzillon.data.loadData(null);
};

appzillon.app.currencyConversion = function() {
    var curData = appzillon.data.scrdata.AccountDetails;
    $.each(curData, function(i, obj) {
        var Amt = Number(obj.LoanAmount);
        obj.LoanAmount = (Number(Amt) * 1.490);
    });
    appzillon.data.loadData(null);
};

by kavana cs at May 22, 2015 09:18 AM

Idiomatic Scala way to combine a list of Eithers into a Left or Right depending on the list's content

I have a list of Eithers

val list: List[Either[String, Int]] = List(Right(5), Left("abc"), Right(42))

As a result I want a Right if everything in the list is a Right; otherwise I want a Left. This sounds like the list should be biased (e.g. use Try instead), but let's assume it isn't or shouldn't be.

The content of the resulting Right or Left will always be the same (e.g. a string, see below); only the container shall differ. E.g. with the list above we want to create a string from this list, so the result should be a Left like Left("Right(5) -> Left(abc) -> Right(42)"). If we had another Right(12) instead of the Left("abc") it should be Right("Right(5) -> Right(12) -> Right(42)").

I could manually check if there is at least one Left in the list and branch with an if to create a Left or a Right as result, but I wonder: is there a more Scala-like way to do it?
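
A sketch of one way to do it without branching twice: render every element once, then choose the container based on whether any Left occurs:

val rendered = list.map {
  case Right(v) => s"Right($v)"
  case Left(v)  => s"Left($v)"
}.mkString(" -> ")

val result: Either[String, String] =
  if (list.exists(_.isLeft)) Left(rendered) else Right(rendered)

Since Either's own toString already prints Right(5), list.mkString(" -> ") would give the same string; the explicit match just decouples the formatting from toString.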

by valenterry at May 22, 2015 09:07 AM

/r/emacs

CompsciOverflow

Who invented the state elimination algorithm for converting finite automata into regular expressions?

The state elimination algorithm is an algorithm for converting finite automata into regular expressions. It's found in many textbooks, including Sipser's Introduction to the Theory of Computation. However, I can't seem to find any references to who first invented this algorithm.

Does anyone know who invented the state elimination algorithm? Ideally, I'd like a reference to a specific paper or textbook.

Thanks!

by templatetypedef at May 22, 2015 08:58 AM

QuantOverflow

Option Pricing under Jump Diffusion Models

I was wondering what the overall approach/intuition is behind pricing options under jump diffusion models. My understanding is that diffusion models such as geometric Brownian motion (Black-Scholes) allow for complete markets, and in particular for perfect hedging strategies, so one can replicate a call option, leading to pricing by a no-arbitrage argument. But since markets are incomplete under jump diffusion models, how would one approach this problem?

There may be some fundamental misunderstanding that I have with the problem of pricing a derivative. So as an additional question: when someone prices a derivative, what is the usual thought process in deciding a price?

by Kamster at May 22, 2015 08:52 AM

CompsciOverflow

TM recognizing $0^n1^n$ requires Ω(log n) space

I am trying to prove that any deterministic 1-tape Turing Machine which recognizes the language $L = \lbrace{0^n1^n | n \geq 0 \rbrace}$ requires $\Omega(\text{log }n)$ space.

I believe this can be done using a crossing sequence argument. I have been trying to imitate the $DSPACE(O(1)) = REG$ proof from wikipedia.

What I have tried is:

Suppose $L \in DSPACE(S(n))$, for some $S(n) = o(\text{log } n)$ and let $M$ be an $S(n)$ space bounded TM recognizing $L$. Since $L$ is not regular, $L \notin DSPACE(O(1))$. Therefore, given $k \in \mathbb{N}$, let $x \in L$ be a string of minimal length that requires more than $k$ worktape cells.

Let $C$ be the set of configurations of $M$ on $x$. That is, $C$ is the set of tuples of the form

(state, work tape head position, work tape contents).

Then $|C| \leq |Q_M| \times S(n) \times 2^{S(n)} \leq 2^{cS(n)} = o(n)$, where $c$ is some suitable constant.

The crossing sequence at $i^{\text{th}}$ cell boundary is the sequence of such configurations occurring as the input head moves across that boundary. Each term of a crossing sequence can be any of the $|C|$ elements from $C$.

Also, the length of any crossing sequence cannot be more than $|C|$; for otherwise, some configuration will repeat and $M$ will enter an infinite loop.

Therefore, number of crossing sequences of $M$ on $x$ $\leq |C|^{|C|} \leq 2^{cS(n)2^{cS(n)}} $.

The problem is that this doesn't give the required bound. So a cleverer argument is needed.

by rookie at May 22, 2015 08:44 AM

QuantOverflow

How to create a basket of currency pairs with the lowest correlation in R?

My strategy is designed to buy and sell all assets of a universe and rebalance periodically. It goes either long or short. To limit risk exposure to a single currency, I would like the assets in the universe to have little relationship and low price correlation between them (positive or even negative, as it can go short).

For example, if the universe has 3 currency pairs A, B and C, my objective is to have price correlation c1 (between A and B), c2 (between B and C) and c3 (between A and C) as low as possible. Let's say c0 is the average of c1, c2, and c3, it will give information on the "global" correlation between pairs A, B and C.

Now imagine we have 26 currency pairs (from A to Z) to consider for creating a small universe of 5 of them (3 was for the example).

The method I apply is to create every possible combination of groups of 5 currency pairs and then calculate the average correlation correl$cor of every single group. I also calculate the standard deviation correl$stddev of c1, c2 and c3, as it will filter out groups with a low c0 but high c1, c2 and c3. Finally I sum the global correlation and the stddev in order to rank groups by a single value, correl$"cor+sddev".

In this example 16 pairs are candidates.

My dataset is an xts object with historical prices of the 16 currency pairs. I could extend it to ~50 to improve the selection, as I guess the bigger the pool, the better.

This is how I proceed to achieve this :

# Put all symbols name in a list 
pairs <- names(mydata)
# Create groups of 5 currency pairs with no duplication
group <- combn( pairs , 5 )
# Calculate number of groups
nb_group <- ncol(group)
# Create empty object to store my result
result <- NULL
# For every groups
for (i in 1:nb_group) {

  # Calculate the mean correlation for the group
  correl <- round(mean(cor(mydata[, group[,i]])),3)
  # Transform as data frame and give a name
  correl <- as.data.frame(correl)
  colnames(correl) <- "cor"
  rownames(correl) <- toString(group[,i])
  # Calculate stddev and the sum of correlation and stddev
  correl$stddev <- round(sd(cor(mydata[, group[,i]])),3)
      correl$"cor+sddev" <- correl$cor + correl$stddev
  # export data
  result <- rbind(correl, result)
}

# Basket of currency pairs with the lowest correlation and stddev
head(result[order(result[,3]),])

This return something like :

> head(result[order(result[,3]),])
                                          cor stddev      cor+sddev
GBPUSD, USDCAD, USDRUB, USDTRY, NZDUSD  0.032  0.583          0.615
GBPUSD, USDCHF, USDCAD, USDRUB, USDTRY  0.048  0.569          0.617
GBPUSD, USDJPY, USDRUB, USDTRY, NZDUSD  0.052  0.576          0.628
GBPUSD, USDCAD, EURCHF, USDRUB, USDTRY  0.048  0.582          0.630
GBPUSD, USDCAD, USDRUB, USDMXN, NZDUSD  0.065  0.566          0.631
GBPUSD, USDCHF, USDCAD, USDRUB, USDMXN  0.097  0.536          0.633

The result is the same when I average correlation and stddev (instead of summing them).

Do you think R could help to achieve this, and is there a more efficient approach to create such a basket of currency pairs?

I've checked portfolio optimization packages like tawny, PortfolioAnalytics and fPortfolio but unfortunately I'm not familiar with financial formulas and I got lost.

Thank you, Florent

by Florent at May 22, 2015 08:37 AM

CompsciOverflow

Optimizing iteration over all permutations of a bit array

edited for clarity:

I have two functions, $f(x)$ which returns an integer and $T(x)$ which returns a boolean, that operate on a bit array of length $n$. I am trying to maximize $f(x)$ over all $x$ which satisfy the condition $T(x) = true$. I also know that if $b$ is a binary subset of $a$ then $f(a) > f(b)$ (e.g. $f(0b1101) > f(0b1001)$). I have no such insight about $T$. Can I avoid iterating over candidates which are subsets of inputs which satisfy $T$?

My current solution is this: beginning with $x = 1^n$ and an empty array as a cache, iterate through the various permutations in order of descending population count. Test if the next input $x$ is a subset of any values in the cache. If it is not a subset and $T(x)=true$, then compute $f(x)$ and add $x$ to the cache of supersets.

However, this strategy must search through the cache on every iteration. Instead, I am imagining a tree which (for $n=5$) looks like $$0b11111$$ $$0b11110\qquad0b11101\qquad0b11011\qquad0b10111\qquad0b01111$$ $$...$$ The root node is $1^n$, and child nodes are created by changing a single $1$ to a $0$ so that all children are binary subsets of their parents. Iterating from the root node, when we reach a node $x$ for which $T$ is true, we could skip calculating $f$ or $T$ for all of its child nodes because they are binary subsets of $x$. Unfortunately, the tree as constructed has $n!$ nodes instead of $2^n$, and nodes begin to appear multiple times in each sub-tree, so pruning the tree is not so simple. Is there a way to realize this type of optimization?

by Dylan MacKenzie at May 22, 2015 08:34 AM

StackOverflow

Calling overloaded scala case apply from java using java generic

There are several methods I need to write in Java; most of the system is in Scala. Interop is not much of an issue except for one case.

I have a java method which takes a generic T in the call to a method, the generic being passed in is actually a scala case class. The challenge I am having is calling the overloaded apply method of the case class from within the java code. It cannot resolve the method apply. I have tried several variations of an extends definition for T but none seem to solve the problem.

Any suggestions? Here is a simplified version of the code so far:

Java (method in question)

public <T> Option<T> getResult(String name) {
    Iterator<Item> items = repo.results(name).iterator();

    if (items.hasNext()) {
        T model = T.apply(items.next());  // ?? how to call overloaded case class apply method
        return new Some<>(model);
    }

    return scala.Option$.MODULE$.apply(null);
}

Scala case class (with overloaded apply method), one of many possible case classes.

case class Person (id: Int, name: String)

object Person {
  def apply(item: Item) = toObject(item)

  def toObject(item: Item): Person = {
    Person(
      item.getProperty[Int]("id"),
      item.getProperty("name")
    )
  }
}
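
The root issue is type erasure: T does not exist at runtime, so T.apply(...) cannot be called from Java (or Scala). One common workaround is to pass the factory in explicitly; a sketch of the idea on the Scala side (repo and Item as in the question):

def getResult[T](name: String)(convert: Item => T): Option[T] = {
  val items = repo.results(name).iterator
  if (items.hasNext) Some(convert(items.next())) else None
}

// the call site picks the concrete companion's apply:
val person: Option[Person] = getResult("someName")(i => Person(i))

From Java, the same shape works by accepting a scala.Function1<Item, T> parameter (e.g. implemented via scala.runtime.AbstractFunction1) instead of trying to call a method on the type parameter itself.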

by IUnknown at May 22, 2015 07:53 AM

Split play project into parts

I am having some play framework web project that consist of 3 logical parts: an userweb , an admin area and akka actors.

It's slowly growing and I need to restart the production server for each small change. That's why I decided to split the project into 3 parts. The admin area communicates with the DB and the actors, and so does the UI; the actors communicate with the web part and the DB. Each part can then be more or less painlessly restarted without restarting the others. But I don't want to split it into separate projects; I only want to produce different JARs from one code base. Is that possible?

And another question: how do I start Akka actors standalone in a Play framework environment?
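
sbt's multi-project builds give exactly that: one code base, several separately packaged artifacts. A sketch of a build.sbt (module names are placeholders):

lazy val common = (project in file("common"))        // shared models, DB access

lazy val userweb = (project in file("userweb"))
  .enablePlugins(PlayScala)
  .dependsOn(common)

lazy val admin = (project in file("admin"))
  .enablePlugins(PlayScala)
  .dependsOn(common)

lazy val actors = (project in file("actors"))        // plain JVM/Akka app
  .dependsOn(common)

For the second question: the actors module does not need Play at all. Its main method can simply create an ActorSystem (val system = ActorSystem("workers")) and spawn the top-level actors, with the web modules reaching it via akka-remote or a message queue; each module can then be restarted on its own.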

by Oleg Golovanov at May 22, 2015 07:48 AM

Is it possible to add subforms to a form in the Orbeon Form Builder?

Until now I have created simple forms in Form Builder. But now I need to create a form which may need a kind of subform in it. Let me explain my scenario first. I have a form (Users) where I can fill in all the user information. Now I want to add the activities/events list for the respective user. Here Events is a kind of form with 5 or 6 fields. Each user will have multiple events. How can I add an Events subform to the Users form?

1) Is it possible to add a subform (Events) to the Users form? Or is it possible to add a separate section for Events in the Users form and be able to add multiple (dynamic) events for that respective user?

Please suggest some solution for my issue.

by manjunath ramigani at May 22, 2015 07:45 AM

Ansible file copy with sudo fails after upgrading to 1.9

In a playbook, I copy files using sudo. It used to work... Until we migrated to Ansible 1.9... Since then, it fails with the following error message:

"ssh connection closed waiting for sudo password prompt"

I provide the ssh and sudo passwords (through the Ansible prompt), and all the other commands running through sudo are successful (only the file copy and template fail).

My command is:

ansible-playbook -k --ask-become-pass --limit=testhost -C -D playbooks/debug.yml

and the playbookd contains:

- hosts: designsync

  gather_facts: yes 

  tasks:
    - name: Make sure the syncmgr home folder exists
      action: file path=/home/syncmgr owner=syncmgr group=syncmgr mode=0755 state=directory
      sudo: yes
      sudo_user: syncmgr

    - name: Copy .cshrc file
      action: copy src=roles/designsync/files/syncmgr.cshrc dest=/home/syncmgr/.cshrc owner=syncmgr group=syncmgr mode=0755
      sudo: yes
      sudo_user: syncmgr

Is this a bug or did I miss something?

François.

by francois at May 22, 2015 07:34 AM

/r/emacs

Writing own major mode: how can i specify a different string start and string end character?

I'm writing a major mode where I can have multiline strings like this:

>abcde fgh ijklmonp< 

where '>' and '<' indicate the respective start and end of the string. The following syntax table entries only mark >...> and <...< strings, which is not what I want.

(modify-syntax-entry ?> "\"" st) (modify-syntax-entry ?< "\"" st) 

How can I achieve this?

submitted by MartenBE

May 22, 2015 07:24 AM

StackOverflow

Slick select row by id

Selecting a single row by id should be a simple thing to do, yet I'm having a bit of trouble figuring out how to map this to my object.

I found this question which is looking for the same thing but the answer given does not work for me.

Currently I have this that is working, but it doesn't seem as elegant as it should be.

def getSingle(id: Long):Option[Category] = withSession{implicit session =>
 (for{cat <- Category if cat.id === id} yield cat ).list.headOption
 //remove the .list.headOption and the function will return a WrappingQuery
}

I feel getting a list then taking headOption is just bulky and unnecessary. I must be missing something.

If it helps, here is more of my Category code

case class Category(
  id: Long = 0L,
  name: String
)
object Category extends Table[Category]("categories"){

  def name = column[String]("name", O.NotNull)
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)

  def * = id ~ name <> (Category.apply _, Category.unapply _)

  ...
}

Is there an easier way to just get an Option[T] from an ID using Slick?

Solution: there was a driver issue. I couldn't use .firstOption, but after upgrading to MySQL JDBC 5.1.25 all is well!
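
For reference, the shape that avoids .list.headOption (a sketch against the Slick 1.x lifted embedding used above), once the driver cooperates:

def getSingle(id: Long): Option[Category] = withSession { implicit session =>
  Query(Category).filter(_.id === id).firstOption
}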

by kingdamian42 at May 22, 2015 07:18 AM

Join string elements from set literal in Clojure

Newish Clojure developer with a Python and C# background. I have something similar to:

(def coll #{
  :key1 ["string1"]
  :key2 ["string2"]})

I need to define a new string that concats the values of the two key vectors. I have tried

(clojure.string/join (get coll :key1 :key2))
(concat (get coll :key1 :key2))

And while these pull the string value of the first key, I can't get the second.

What is the idiomatic Clojure way to get and concat two values from a set? My desired output is:

"string1string2"

by damienstanton at May 22, 2015 07:08 AM

TheoryOverflow

Runtime of Tucker's algorithm for generating an Eulerian circuit

What is the time complexity of Tucker's algorithm for generating an Eulerian circuit? Tucker's algorithm takes as input a connected graph whose vertices are all of even degree, constructs an arbitrary 2-regular detachment of the graph and then generates an Eulerian circuit.

I couldn't find any reference that says, for example, how the algorithm constructs an arbitrary 2-regular detachment of the graph, what data structures it uses, what the time complexity is, and so on. Any ideas?

by Ernst Stavro at May 22, 2015 07:01 AM

CompsciOverflow

Remove Leftover Thunderbird Notification Popup from Screen (Windows 8 Os) [migrated]

I hope this question is within the scope of this Stack Exchange community. Thunderbird has a popup notification for new mail (see image), and for some reason it got stuck on my screen. I was wondering if there is any way to remove it from the screen without having to restart my system. This is nothing serious, but I am just really curious.

OS: Windows 8.

P.S. Tried restarting the explorer, didn't work.

Thunderbird Leftover Notification.

Thank You.

by Pazza22 at May 22, 2015 06:51 AM

QuantOverflow

Negative high frequency intraday volatility - Zhou estimator

To estimate high frequency tick data stock intraday volatility, I have read Robert Almgren's notes7.pdf

http://www.cims.nyu.edu/~almgren/timeseries/notes7.pdf

where he talks about the bias free estimator by Zhou:

$Z = \sum ((y_j - y_{j-1})^2 + 2(y_j - y_{j-1})(y_{j+1} - y_j))$

where $y_j$ is the log return of the price at time $j$.

However, this expression sometimes yields a negative value. The first term is a square, which is always positive, but the second term $2(y_j-y_{j-1})(y_{j+1}-y_j)$ can be negative. So how do I treat this estimator? I want a positive volatility, not a negative one! I am thinking of just applying an absolute value to the second term, but that does not sound right. Any suggestions?

by Orvar Korvar at May 22, 2015 06:42 AM

StackOverflow

play.PlayExceptions$TemplateCompilationException: Compilation error[Not parsed?]

I am facing an error when I run my Play application. The error points to this line in my abc.scala.html file:

 <% _.each(myList, function(myList, index, list){%>

The error is "Not parsed?".

Here myList is a JSON array.

I tried commenting out the lines, but then the error points to another line containing myList.

by 3_User at May 22, 2015 06:22 AM

CompsciOverflow

Is the minimal number of colors needed to color a graph some fixed number?

Consider the following decision problem:

Input: Undirected graph $G=(V,E)$

Question: Is the minimum number of colors needed to color the vertices (such that no two adjacent vertices are colored the same) exactly 2015?

Note that 2015 could be any fixed number. This number is not part of the input.

I need to understand what the minimal complexity class containing this problem is, among $\sf P,NP,coNP,NP\cup coNP,\Sigma _{2} \cap \Pi _{2}, \Sigma _{2} \cup \Pi _{2}$.

I don't think it's $\sf NP$ or $\sf coNP$, since I can't think of a verifier for either the language or its complement. The problem is maybe in $\sf \Sigma _{2} \cap \Pi _{2}$, since we can think of it as

there exists $f$, some 2015-coloring function, such that for all $g$, a 2014-coloring function, $f$ is a correct coloring and $g$ isn't a correct coloring.

So by the above, it seems that the language is in $\sf \Sigma _{2}$; but the order of the quantifiers doesn't really matter here, so the language is also in $\sf \Pi _{2}$, I think.

And maybe there is some deterministic polynomial-time algorithm?

by Astro Nauft at May 22, 2015 06:19 AM

StackOverflow

Is there a way to chain two arbitrary specs2 tests (in Scala)?

Every now and then I run into a situation where I need to make absolutely sure that one test executes (successfully) before another one.

For example:

"The SecurityManager" should {
    "make sure an administrative user exists" in new WithApplication with GPAuthenticationTestUtility {
        checkPrerequisiteAccounts(prerequisiteAccounts)
    }

    "get an account id from a token" in new WithApplication with GPAuthenticationTestUtility {
        val token = authenticate(prerequisiteAccounts.head)

        token.isSuccess must beTrue

        myId = GPSecurityController.getAccountId(token.get)

        myId != None must beTrue
        myId.get.toInt > 0 must beTrue
    }

The first test will create the admin user if it doesn't exist. The second test uses that account to perform a test.

I am aware I can do a Before/After treatment in specs2 (though I've never done one). But I really don't want checkPrerequisiteAccounts to run before every test, just before that first test executes... sort of a "before you start doing anything at all, do this one thing..."

Anyone know if there is a way to tag a particular test as "do first" or "do before anything else?"
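
specs2 does have a hook for exactly this: in a mutable specification, sequential pins the execution order and step runs a block once at that point, before the examples declared after it. A sketch:

import org.specs2.mutable.Specification

class SecurityManagerSpec extends Specification {
  sequential   // run fragments in declaration order

  step {
    // one-time setup, e.g. checkPrerequisiteAccounts(prerequisiteAccounts)
  }

  "The SecurityManager" should {
    "get an account id from a token" in {
      // ... the example from the question ...
      success
    }
  }
}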

by Zac at May 22, 2015 06:18 AM

How do I mock a JSON POST request in Ring?

I'm using peridot - https://github.com/xeqi/peridot - to test my Ring application, and it's working fine until I try to mock a POST request with JSON data:

(require '[cheshire.core :as json])
(use 'compojure.core)

(defn json-post [req]
  (if (:body req)
    (json/parse-string (slurp (:body req)))))

(defroutes all-routes
  (POST "/test/json" req  (json-response (json-post req))))

(def app (compojure.handler/site all-routes))

(use 'peridot.core)

(-> (session app)
    (request "/test/json"
             :request-method :post
             :body (java.io.ByteArrayInputStream. (.getBytes "hello" "UTF-8"))))

gives IOException: stream closed.

Is there a better way to do this?

by zcaudate at May 22, 2015 06:05 AM

Mixins with Akka

I have an actor into which I want to inject a dependency using a mixin. Code:

trait ProductsAware {
   def getProducts: List[Product]
}

trait MyActor extends Actor with ProductsAware {
   val products = getProducts  
...
}

As you can see, I'm just trying to decouple MyActor from a concrete instance of the ProductsAware trait, and provide the concrete instance elsewhere (when creating the actor).

And this is concrete implementation of ProductsAware trait:

trait ProductsAwareFirstImpl extends ProductsAware {
  override def getProducts = {List(new Product())}
}

And I want to create new MyActor and inject to MyActor this concrete implementation ProductsAwareFirstImpl:

system.actorOf(Props[MyActor])

The problem is that this is not safe at compile time, i.e. anyone can forget to mix ProductsAwareFirstImpl into MyActor.
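
A sketch of one compile-time-safe variant: give the actor a self-type, so it cannot even be instantiated without some ProductsAware implementation, and pick the concrete implementation at Props-creation time (this assumes ProductsAwareFirstImpl extends ProductsAware, as above):

class MyActorImpl extends Actor { this: ProductsAware =>   // mix-in now required by the compiler
  val products = getProducts
  def receive = { case msg => /* ... */ }
}

// forgetting the mix-in here no longer compiles:
system.actorOf(Props(new MyActorImpl with ProductsAwareFirstImpl))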

by MyTitle at May 22, 2015 05:52 AM

How do I append to a file in Scala?

I would like to write a method similar to the following

def appendFile(fileName: String, line: String) = {
}

But I'm not sure how to flesh out the implementation. Another question on here alludes to Scala 2.9 capabilities but I could not find further details.
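
A minimal sketch using java.io with the append flag (the second FileWriter argument); for frequent appends you would keep the writer open instead of reopening it per call:

import java.io.{BufferedWriter, FileWriter, PrintWriter}

def appendFile(fileName: String, line: String): Unit = {
  val out = new PrintWriter(new BufferedWriter(new FileWriter(fileName, true))) // true = append
  try out.println(line)
  finally out.close()
}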

by deltanovember at May 22, 2015 05:39 AM

UnixOverflow

What to do when "nice" isn't good enough (FreeBSD)

I've been using x265 to encode some video on my workstation lately, but I have a problem: even though I launch it using nice -n 20 x265 to deprioritize it, it still slows the computer to a crawl while it's running. Everything still works, it's just... slow! I even see delays before the characters appear while typing in a terminal.

Do I have to live with this, or are there some other things I can try?

by Brandon Thomson at May 22, 2015 05:01 AM

StackOverflow

ReactiveMongo Extensions: Bulk update using reactive mongo extensions

Is there any way to update records in bulk? I am trying to update user objects using the following code:

.update($doc("_id" $in (usersIds: _*)), users, GetLastError(), false , true)

In the above code I am passing the users list. In the user list I also add new properties and change the state of existing properties, but with this statement the records are not updated.

If I use the following code:

.update($doc("_id" $in (usersIds: _*)), $set("inviteStatus" $eq "Invited"), GetLastError(), false , true)

The records update successfully.
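
The likely reason the first call changes nothing is that the second argument of update must be a single update or replacement document; a List of user objects is not a valid update document. A hedged sketch of one way around it, issuing one replacement update per user and collecting the futures (the _id field and the DSL spelling are assumptions based on the snippets above):

val results = Future.sequence(
  users.map(user => collection.update($doc("_id" $eq user._id), user))
)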

by Harmeet Singh Taara at May 22, 2015 05:01 AM

Fefe

Don Alphonso has written an article worth reading about the ...

Don Alphonso has written an article worth reading about the Westfalenblatt and a current shitstorm. At its core it is about a man who wrote a letter to the Westfalenblatt, to one of those life-advice columnists. The man has two daughters, six and eight years old. His brother is gay, is marrying his partner, and asked him to send the daughters as flower girls. The man has a problem with that and asks for advice.

The columnist reinforces his fears and advises the man to turn his brother down. Here you can see what that looked like in the newspaper.

You can probably already guess where the sticking point might have been.

For my part, I am a bit puzzled as to why one cannot send the children. We are talking about a wedding here, not an orgy. What exactly is supposed to happen there that could confuse children? Children are not born bigoted; parents have to instill that through years of painstaking work. The children would surely have the least trouble with it. But hey, not my children, not my decision. Incidentally, the columnist's core message is not "PROTECT THE CHILDREN FROM THE GAYS!!1!" but rather:

Tell your brother honestly what you think, and that your children will not take part in the celebration, because you and your wife do not want the children to be confused.
And that I can fully endorse as a recommendation. Things like this have to be addressed honestly and then talked through. If this guy has such a problem with his brother's sexuality that he writes to a newspaper, then liberal slogans probably won't get you very far there either.

What happened next is what happens fairly often in such situations. Where the matter gets really disgusting, though, is when the affair gets exploited by the Greens, of all people. Yes, these Greens here.

In the end the newspaper fired the columnist.

That is how far we have come in this country. Nowadays individual people can simply shitstorm away dissenters they dislike at newspapers.

But that this, of all things, is triggered by such election-campaign bullshit, because the Greens currently have a fat child-abuse scandal hanging around their neck and would like to get rid of it by deliberately pointing at others, that just makes me furious. Mr. Beck, you should be ashamed of yourself. And on questions of homosexuality and child abuse I would urgently advise the Greens to keep their mouths shut from now on, at the latest since this sentence from the article linked above about the pedophilia wing:

"It is hard to bear," said Birk, "but there were perpetrators in the ranks of the Greens." They were well connected both within the party and to societal groups. "We dealt with this until the mid-1990s. Until 1993, the gay working group of our party was more or less a pedophile wing," said Birk.
So if parents here are worried about sending their children to an event held by gay people, then the Greens should keep very, very quiet. Because their own behavior could well serve as a rational justification for such fears.

Incidentally, it should also be pointed out that the original tweet is of course factually wrong, and to me it even looks like a deliberate misrepresentation. But that is just standard practice on Twitter these days. People don't even notice it anymore. Really disgusting, that sort of thing.

May 22, 2015 05:01 AM

QuantOverflow

ETNs as bank funding

I've just read the article in the link below and would like to know if someone can elaborate on a statement. I have added the whole paragraph; the part in question is about the use of ETNs as cheap funding. How do banks use ETNs as funding?

I live in Denmark where the only ETF-like offerings are ETNs and I'm trying to figure out why none of the banks are creating ETFs.

The investment banks take advantage of their superior sophistication. From the get-go, the ETN is a fantastic deal for banks. It's in the DNA of the product; once held, an ETN almost can't help but be fabulously profitable to its issuer. Why? They're dirt-cheap to run because the fixed costs are already borne by infrastructure set up for structured products desks. They're an extremely cheap source of funding, the life blood of the modern bank. More important, this funding becomes more valuable the bleaker an investment bank's health. As a cherry on top, investors pay hefty fees for the privilege of offering this benefit. This isn't enough for some issuers. They've inserted egregious features in the terms of many ETNs. The worst we've identified so far is a fee calculation that secretly shifts even more risk to the investor, earning banks fatter margins when their ETNs suddenly drop in value.

The article is from Morningstar: Exchange-Traded notes are worse than you think

by KERO at May 22, 2015 04:41 AM

StackOverflow

How does binary encoding in HBase work?

I'm saving breeze SparseVectors to HBase using com.twitter.chill.KryoInjection for serialization to a byte array, which seemed to work fine. But then I noticed that after reading the vectors back out of HBase some values are different/missing. Now I'm wondering how HBase encodes data and where the data could get mutated (saving/encoding/perhaps compression/reading?).

I wanted to compare the vectors stored in HBase with the corresponding vectors right before saving to HBase, to see if they are equal (then reading would likely be the problem), but I ran into the problem of how to do this. The representation of a vector in the HBase shell looks like

column=d:vector, timestamp=1431936909897, value=\x01\x00breeze.linalg.SparseVector$mcD$s\xF0\x01\x00\x01\x01breeze.collection.mutable.SparseArra\xF9\x01\x1A\x01\x02[\xC4\x01\x0 E?\xF0\x00\x00\x00\x00\x00\x00?\xC5-\xF2\x15\x85Z:?\xD6,{ci\xA8\x08@\x06P\xE3\x85\xACy'?\xEB\xA2\x09\xAA\xA3\xAD\x19?\xE4M\xCB\x98\xB8\x00f?\xE8\x00\x00\x00\x00\x00\x00@"\xA4Z\ x1C\xAC\x081?\xEB\xB0\xE3\xCD\x9AR&?\xE4\xB7\xF7K`\xDD)?\xEA\xD3\xC0\x06\x14\xEC\xF7?\xF3\x01]\xE8R46?\xC45\x03\x97\xE5\x0E\x8D\x0A\x00\x00\x00\x00\x00\x00\x00\x00\x01\x0E\x02\ x0A0~\xB2\x01\xCC\x01\xBA\x02\xD22\xE4a\xDA\xB6\x0A\xD0\x8B&\xC0\xC0)\xDA\xCC\x05\x01\xC0\x84=\x01\x03breeze.storage.Zero$DoubleZero\xA4\x01\x01\x03\x06

How can I compare this to the "normal" byte representation I get when serializing a vector to a text file? Did anyone already have a similar issue and can give advice?
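
One useful fact here: HBase stores cell values byte-for-byte, and the shell merely escapes non-printable bytes as \xNN, so the displayed form is not evidence of mutation. A sketch of a round-trip check (chill's KryoInjection is an Injection, so invert returns a Try):

import com.twitter.chill.KryoInjection

val bytes = KryoInjection(vector)          // exactly the bytes you write to HBase
val back  = KryoInjection.invert(bytes)    // Try[Any]
println(back.get == vector)                // round-trip sanity check, no HBase involved

// after reading the cell back (e.g. Result#getValue), compare the raw bytes:
// java.util.Arrays.equals(bytes, bytesFromHBase)

If the local round-trip holds but the bytes from HBase differ, the write/read path is the place to look; if the bytes match but the decoded vector differs, the deserialization side is.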

by nadinef1288 at May 22, 2015 04:29 AM

[Recommendations]: Migrating one Big Java Application to Akka [on hold]

My apologies for this being a non-programming question. I wanted to discuss some ideas and thought the community here could help me.

I have been reading a lot about Akka lately (with one weekend of playing with it). The ability to supervise and recover from crashes is exactly what I need for my current project.

About current project

  • It is written in Java.
  • It is deployed on client’s machine.
  • This has one project that does many things - processing log files, responds to web request, run scheduled jobs amongst other things.
  • This project uses executors when possible to spawn threads as needed to parallelize the effort.

Why I think it is better candidate for akka

  • As it is deployed on the client’s machine, it’s not trivial to know if it is up and running. A supervisor hierarchy would help the project recover from errors when possible, or else inform us (in some way: messaging, email)
  • Better Monitoring, with different monitoring actors, better hierarchies and separation of concerns
  • Log processing with akka-streams and back-pressure (I realized a recent customer issue could have been avoided if we had akka-streams in place)
  • Help with thinking in actors and staying away from synchronized and locks

Recommendation Needed

  • There are many parts where I believe that rewriting in Akka would be useful and better, but it is better to take baby steps as we are still learning. Also, we (a team of 2, planning to pick Scala) plan to create scaffolding around the current project.
  • The scaffolding is a Parent/ApplicationActor which calls our current project. We plan to achieve this one level of abstraction so as to get the flexibility to do more things with Scala and Akka.

I am looking for advice/criticism/suggestions from all of you, to make sure that either this plan is worthless or I have enough ideas to make an informed decision.

by daydreamer at May 22, 2015 04:21 AM

/r/compsci

any ideas on how to make the Page Rank matrix from google?

I've been procrastinating on this assignment because I have no idea where to start. I know the linear algebra behind it a little bit, and a few things about finite state machines, but I really don't have the slightest idea where to start. Maybe you guys know something about this.
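
One way in: PageRank is the stationary vector of the "Google matrix" G = d*M + ((1-d)/n)*J, where M is the column-stochastic link matrix, J the all-ones matrix, and d (about 0.85) the damping factor. A tiny self-contained sketch with a made-up 4-page web, using plain power iteration:

val n = 4
// links(j) = pages that page j links to (toy example)
val links = Map(0 -> List(1, 2), 1 -> List(2), 2 -> List(0), 3 -> List(0, 2))
val d = 0.85

// column-stochastic link matrix: M(i)(j) = 1/outdeg(j) if j links to i, else 0
val M = Array.tabulate(n, n) { (i, j) =>
  if (links(j).contains(i)) 1.0 / links(j).size else 0.0
}

// power iteration on G = d*M + (1-d)/n * J (r always sums to 1)
var r = Array.fill(n)(1.0 / n)
for (_ <- 1 to 50)
  r = Array.tabulate(n) { i =>
    (1 - d) / n + d * (0 until n).map(j => M(i)(j) * r(j)).sum
  }

println(r.mkString(", "))   // the PageRank scores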

submitted by ImNoGayFish

May 22, 2015 04:19 AM

StackOverflow

How can I integrate Apache Spark with the Play Framework to display predictions in real time?

I'm doing some testing with Apache Spark, for my final project in college. I have a data set that I use to generate a decision tree, and make some predictions on new data.

In the future, I plan to put this project into production: I would generate a decision tree (batch processing), receive new data through a web interface or a mobile application, predict the class of each entry, and inform the user of the result instantly. I would also keep storing these new entries so that, after a while, a new decision tree can be generated (batch processing), repeating this process continuously.

Although Apache Spark is aimed at batch processing, there is the streaming API that allows you to receive real-time data. In my application this data will only be used by a model built in a batch process with a decision tree, and since the prediction is quite fast, the user gets the answer quickly.

My question is: what are the best ways to integrate Apache Spark with a web application (I plan to use the Play Framework, Scala version)?
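
One pattern that fits this (a sketch; the model path and object names are placeholders, and it assumes Spark 1.3+, where MLlib tree models can be saved and loaded): train and save the model in a batch job, have the Play application load it once at startup, and serve predictions per request, since predict on a single vector is a cheap local call:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.tree.model.DecisionTreeModel

object Predictor {
  // one long-lived context in the web app, not one per request
  val sc    = new SparkContext(new SparkConf().setAppName("serving").setMaster("local[*]"))
  val model = DecisionTreeModel.load(sc, "hdfs:///models/decision-tree")  // placeholder path

  def predict(features: Array[Double]): Double =
    model.predict(Vectors.dense(features))   // fast: no cluster round-trip per request
}

A Play controller action can then just call Predictor.predict(...) and render the result; the periodic re-training job overwrites the saved model, after which the web app can reload it.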

by Douglas Arantes at May 22, 2015 04:15 AM

CompsciOverflow

Can a relatively small subset of random numbers be permuted and reused and still guarantee good expected running time for an algorithm like quicksort?

So this is sort of a general question but I'll limit the discussion to randomized quicksort to make it clear. Suppose generating "true" random bits is hard, e.g. because it requires measuring something in nature that can be considered essentially "random", like the 50th binary digit after the decimal point in wind speed at some location recorded in miles per hour. Or maybe quantum outcomes observed that can be considered truly random. Whatever.

So we do the following: we generate $k$ "truly" random bits and then we re-use these $k$ bits over and over by using a pseudo-random number generator to permute them.

In terms of $k$ (the number of initial truly random bits) and in terms of the total count of numbers to be sorted, $n$, and assuming the permutation algorithm of the $k$ initial random bits repeated over and over is known to an adversary, can we assert that an algorithm like quicksort will have good worst-case expected running time, assuming that "random" bits are used in the algorithm in the natural way to choose a pivot? How do $k$ and $n$ play into the worst-case expected running time? If we need $k = \Omega(n \log n)$ initial truly random bits to assure good worst-case expected running time, that isn't very interesting. But maybe we can do somewhat ok with asymptotically fewer initial random bits?

by user2566092 at May 22, 2015 04:14 AM

/r/netsec

Wondermark

XKCD

StackOverflow

sbt Scalatest NoClassDefFoundError

I am trying to run a test using sbt and I am getting java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class. Environment: Unix, sbt, ScalaTest.

I have pointed the Scala library to the one which I installed instead of the IntelliJ plugin's (which didn't work either).

Project build file:

name := "PleaseGod"

version := "1.0"

scalaVersion := "2.11.5"

libraryDependencies += "org.scalatest" % "scalatest_2.11" % "2.2.1" % "test"

libraryDependencies += "org.seleniumhq.selenium" % "selenium-java" % "2.35.0" % "test"

libraryDependencies += "org.scala-lang.modules" % "scala-xml_2.11" % "1.0.1"

Please refer to the following link for the project structure: http://imgur.com/iugBPD6

by Automator at May 22, 2015 03:55 AM

Difference between Scala REPL and Clojure REPL - compile speed

I tried to run two factorial functions with the same algorithm, one in Scala, the other in Clojure:

// Scala:
def factorial(n:Int) = (1 to n).foldLeft(1: BigInt)(_*_)

--

;; Clojure:
(defn factorial [x]
  (reduce * (range 1N (inc x))))

The first time I enter the function into the REPL, the Clojure one evaluates (the function definition, not calculating the factorial) without any noticeable delay, while the Scala one pauses for a short time. (Although very, very short, it is still noticeable.)

When I apply the function to calculate the factorial, both return the result very fast.

I would like to gain a basic understanding of the REPLs. Are there any differences between the two REPLs? Is the Scala REPL a real REPL?

by Nick at May 22, 2015 03:43 AM

Will reading the Little Lisper help me in learning clojure?

I plan to pick up the clojure language.

I have at my disposal an old book:

The Little Lisper

I know there are more recent versions of it (The Little Schemer), but I'm wondering if it will help ease me into picking up Clojure. Or should I find another learning resource?

by Frankie Ribery at May 22, 2015 03:10 AM

Changing map behaviour in Clojure

I need to modify the behavior of the map function so that the mapping runs to the maximum collection size rather than the minimum, using zero for missing elements.

Standard behavior:

(map + [1 2 3] [4 5 6 7 8]) => [5 7 9]

Needed behavior:

(map + [1 2 3] [4 5 6 7 8]) => [5 7 9 7 8]

I wrote a function to do this, but it does not seem to extend well to varargs.

(defn map-ext [f coll1 coll2]
  (let [mx (max (count coll1) (count coll2))]
    (map f
     (concat coll1 (repeat (- mx (count coll1)) 0))
     (concat coll2 (repeat (- mx (count coll2)) 0)))))

Is there a better way to do this?

by mishadoff at May 22, 2015 02:25 AM

TheoryOverflow

Testing whether an analytic function vanishes identically

I have an application that basically reduces to testing whether a given function vanishes identically. The function is given symbolically, using unary and binary operators on complex numbers. For example, we might want to test the function $(z+1)^2-z^2-2z-1$. (It could be a function of more than one variable.)

The problem is known to be undecidable. However, there are reasonable heuristic approaches. The one I've been using involves numerical sampling. I just pick random complex values for $z$, and evaluate the function using machine-precision arithmetic. This works pretty well in most cases, and is efficient. If the function is known to be analytic, then the method would succeed with unit probability given infinite-precision arithmetic.

One could use a computer algebra system for this, but a CAS is computationally expensive and often will not reach any definite conclusion.

Although the numerical sampling heuristic generally works pretty well, it's not hard to come up with examples where it fails. For example, consider the function $z^{100}$. For almost any $|z|<1$, I get an underflow, and for almost any $|z|>1$, I get an overflow. Either way, the result is inconclusive. (An overflow doesn't automatically tell me the function is nonzero, because it could happen at some intermediate step in the computation.)

Can this heuristic be improved on? Using multiple-precision arithmetic rather than machine precision doesn't seem to be a big win. It's much more computationally expensive, and even if I take 100 digits of precision, it's still pretty easy to construct examples where it fails.

It seems like it might make sense to try some kind of adaptive algorithm in bad cases to search for regions of the complex plane that neither underflow nor overflow. Or more generally we could maintain error bounds, and search for regions where the error bounds do not make the result inconclusive.

Symbolic differentiation is cheap and doesn't fail, so I could also test whether the function's derivatives are zero. But this doesn't necessarily help for an example like $z^{100}$, unless I happen to get lucky and try the 100th derivative.

by Ben Crowell at May 22, 2015 02:24 AM

Wondermark

THIS WEEKEND: Vancouver! PLUS: Maker Faire Roll-a-Sketches!

VANCOUVER, BC (because comics)

THIS WEEKEND I’ll be in Vancouver, BC for the Vancouver Comic Arts Festival!

Held at the Roundhouse on Saturday and Sunday (May 23-24), admission is COMPLETELY FREE and so I expect to see ALL of you there.

Besides holding down the ol’ table, I’ll also be participating in these events:

INSPECTOR PANCAKES AND HIS TERRIBLE FRIENDS
Sunday, May 24th 12:00 PM – 12:45 PM

Hosted by Karla Pacheco
Featuring Kory Bing, Emily Horne, Jeph Jacques, David Malki !

To celebrate the world premiere of Inspector Pancakes Helps The President of France Solve The White Orchid Murders, creator Karla Pacheco, along with her terrible, terrible co-conspirators, planned to hold a panel. But her panel plans will have to wait, because it looks like one of her panellists … is a murderer! Armed with her wits, a group of ornery comics creators, and a team of adorable crack dog detectives, Karla Pacheco is on the case! But will you help her investigation, or hinder it?

QUICK ON THE DRAW 3: THE QUICKENING
Sunday, May 24th 4:00 PM – 4:45 PM

Hosted by The Fictionals
Featuring Ian Boothby, Jeff Ellis, David Malki !, JJ McCullough, Alina Pete

Join The Fictionals Comedy Co., Cloudscape Comics, and double-threats Ian Boothby and David Malki ! for a packed hour of comics & comedy in Quick on the Draw 3: The Quickening! This event combines the best of both worlds: improvisers perform for your amusement while artists draw their shenanigans. Featuring improv games, live drawing, and—most importantly—prizes! Audience participation is suggested, and fun is mandatory.


I had a great time at Maker Faire last week! PROBABLY the highlight was getting a 19th-century tintype portrait taken by the fine folks at Sonoma Tintype.

Tintype photography etches an image onto a metal plate. I had thought about getting a cool shot for a vintage-feeling author photo, or something.

INSTEAD what ENDED UP HAPPENING was a portrait of the 19th century’s most notorious pickax-murdering train robber:

i'm your huckleberry

SO GREAT. Everyone should get a tintype photo done, it’ll outlast any of your other worldly possessions.

Oh hey here’s some Roll-a-Sketches too!! I did a bunch, of course, but here’s an incomplete selection of my favorites from the weekend (click the images for bigger):

ROYALTY + WONDER WOMAN + GENIE + SKATES:

God save the queen

CAMPING + BODYBUILDER:

gotta work out those issues

TREE + HUNGRY + CHEERLEADER + BANANA:

defense defense

ROYALTY + UNICORN:

I say good neigh to you sir

BLIMP + RABBIT:

slowly. silently. persistently.

And of course ZOMBIE + MUSICAL + BANANA + WIZARD:

doo dee dee da dee doo doo

Thanks to the many fine folks who said hello and got to take home a Roll-a-Sketch!! See the rest of you in Vancouver!

lovely each and all

by David Malki at May 22, 2015 02:22 AM

arXiv Programming Languages

Synthesis through Unification. (arXiv:1505.05868v1 [cs.PL])

Given a specification and a set of candidate programs (program space), the program synthesis problem is to find a candidate program that satisfies the specification. We present the synthesis through unification (STUN) approach, which is an extension of the counter-example guided inductive synthesis (CEGIS) approach. In CEGIS, the synthesizer maintains a subset S of inputs and a candidate program Prog that is correct for S. The synthesizer repeatedly checks if there exists a counter-example input c such that the execution of Prog is incorrect on c. If so, the synthesizer enlarges S to include c, and picks a program from the program space that is correct for the new set S.

The STUN approach extends CEGIS with the idea that given a program Prog that is correct for a subset of inputs, the synthesizer can try to find a program Prog' that is correct for the rest of the inputs. If Prog and Prog' can be unified into a program in the program space, then a solution has been found. We present a generic synthesis procedure based on the STUN approach and specialize it for three different domains by providing the appropriate unification operators. We implemented these specializations in prototype tools, and we show that our tools often perform significantly better on standard benchmarks than a tool based on a pure CEGIS approach.

by Rajeev Alur, Pavol Cerny, Arjun Radhakrishna at May 22, 2015 01:30 AM

On the Likelihood of Single-Peaked Preferences. (arXiv:1505.05852v1 [cs.GT])

This paper contains an extensive combinatorial analysis of the single-peaked domain restriction and investigates the likelihood that an election is single-peaked. We provide a very general upper bound result for domain restrictions that can be defined by certain forbidden configurations. This upper bound implies that many domain restrictions (including the single-peaked restriction) are very unlikely to appear in a random election chosen according to the Impartial Culture assumption. For single-peaked elections, this upper bound can be refined and complemented by a lower bound that is asymptotically tight. In addition, we provide exact results for elections with few voters or candidates. Moreover, we consider the Polya urn model and the Mallows model and obtain lower bounds showing that single-peakedness is more likely to appear under these probability distributions.

by Marie-Louise Bruner, Martin Lackner at May 22, 2015 01:30 AM

Quantifying Conformance using the Skorokhod Metric (full version). (arXiv:1505.05832v1 [cs.SY])

The conformance testing problem for dynamical systems asks, given two dynamical models (e.g., as Simulink diagrams), whether their behaviors are "close" to each other. In the semi-formal approach to conformance testing, the two systems are simulated on a large set of tests, and a metric, defined on pairs of real-valued, real-timed trajectories, is used to determine a lower bound on the distance. We show how the Skorokhod metric on continuous dynamical systems can be used as the foundation for conformance testing of complex dynamical models. The Skorokhod metric allows for both state value mismatches and timing distortions, and is thus well suited for checking conformance between idealized models of dynamical systems and their implementations. We demonstrate the robustness of the system conformance quantification by proving a transference theorem: trajectories close under the Skorokhod metric satisfy "close" logical properties. Specifically, we show the result for the timed linear time logic TLTL augmented with a rich class of temporal and spatial constraint predicates. We provide a window-based streaming algorithm to compute the Skorokhod metric, and use it as a basis for a conformance testing tool for Simulink. We experimentally demonstrate the effectiveness of our tool in finding discrepant behaviors on a set of control system benchmarks, including an industrial challenge problem.

by Jyotirmoy V. Deshmukh, Rupak Majumdar, Vinayak S. Prabhu at May 22, 2015 01:30 AM

Opportunistic Human Observation Attacks: Perils in Designing Zero-Effort Deauthentication. (arXiv:1505.05779v1 [cs.CR])

Deauthentication is an important aspect of any authentication system. The widespread use of computing devices in daily life has underscored the need for zero-effort deauthentication schemes. However, the quest for eliminating user effort can lead to subtle security flaws in the authentication schemes. We show that a recently published zero-effort deauthentication scheme, ZEBRA, makes a hidden design assumption that can be exploited by an adversary employing an opportunistic strategy, i.e., mimicking only a selected set of activities. Via extensive experiments with an end-to-end real-time ZEBRA system we implemented, we show that our opportunistic attacker succeeds with a significantly higher probability compared to a naïve attacker (one who attempts to mimic all activities). By understanding the design flaw in ZEBRA as a case of tainted input, we show that we can draw on well-understood design principles to improve ZEBRA's security.

by O. Huhta, P. Shrestha, S. Udar, N. Saxena, N. Asokan at May 22, 2015 01:30 AM

Network Coding Tree Algorithm for Multiple Access System. (arXiv:1505.05775v1 [cs.NI])

Network coding is famous for significantly improving the throughput of networks. The successful decoding of the network coded data relies on some side information of the original data. In that framework, independent data flows are usually first decoded and then network coded by relay nodes. If appropriate signal design is adopted, physical layer network coding is a natural way in wireless networks. In this work, a network coding tree algorithm which enhances the efficiency of the multiple access system (MAS) is presented. For MAS, existing works tried to avoid collisions, yet collisions happen frequently under heavy load. By introducing network coding to MAS, our proposed algorithm achieves better throughput and delay performance. When multiple users transmit signals in a time slot, the mixed signals are saved and used to jointly decode the collided frames after some component frames of the network coded frame are received. The splitting tree structure is extended to the new algorithm for collision solving. The throughput of the system and average delay of frames are presented in a recursive way. Besides, extensive simulations show that the network coding tree algorithm enhances the system throughput and decreases the average frame delay compared with other algorithms. Hence, it improves the system performance.

by Zhengchuan Chen, Ke Xiong, Pingyi Fan, Chen Chen at May 22, 2015 01:30 AM

On the Evaluation of GMSK Scheme with ECC Techniques in Wireless Sensor Network. (arXiv:1505.05755v1 [cs.NI])

Wireless sensor nodes are powered by batteries, for which replacement is very difficult. That is why optimization of energy consumption is a major objective in the area of wireless sensor networks (WSNs). On the other hand, a noisy channel has a prominent influence on the reliability of data transmission. Therefore, an energy-efficient transmission strategy should be considered in the communication process of wireless nodes in order to obtain optimal network energy consumption. Indeed, the choice of a suitable modulation format with the proper error-correcting code (ECC) plays a decisive role in achieving better energy conservation. In this work, we aim to evaluate the performance of Gaussian Minimum Shift Keying (GMSK) modulation with several combinations of coding strategies using various analysis metrics such as Bit Error Rate (BER) and energy consumption. Through extensive simulation, we show that the gain achieved through GMSK modulation with a suitable channel coding mechanism is promising for obtaining reliable communication and energy conservation in WSNs.

by Rajoua Anane, Kosai Raoof, Ridha Bouallegue at May 22, 2015 01:30 AM

Product-Mix Auctions and Tropical Geometry. (arXiv:1505.05737v1 [cs.GT])

In a recent and ongoing work, Baldwin and Klemperer found a connection between tropical geometry and economics. They gave a sufficient condition for the existence of competitive equilibrium in product-mix auctions of indivisible goods. This result, which we call the Unimodularity Theorem, can also be traced back to the work of Danilov, Koshevoy, and Murota. We introduce auction theory for a mathematical audience and give two simple proofs of the Unimodularity Theorem - one via tropical and discrete geometry, and one via integer programming. We also give connections to Walrasian economies, stable matching with transferable utilities, the theory of discrete convex analysis, and some well-known problems on lattice polytopes.

by Ngoc Mai Tran, Josephine Yu at May 22, 2015 01:30 AM

A Fast Network-Decomposition Algorithm and its Applications to Constant-Time Distributed Computation. (arXiv:1505.05697v1 [cs.DC])

A partition $(C_1,C_2,...,C_q)$ of $G = (V,E)$ into clusters of strong (respectively, weak) diameter $d$, such that the supergraph obtained by contracting each $C_i$ is $\ell$-colorable is called a strong (resp., weak) $(d, \ell)$-network-decomposition. Network-decompositions were introduced in a seminal paper by Awerbuch, Goldberg, Luby and Plotkin in 1989. Awerbuch et al. showed that strong $(exp\{O(\sqrt{ \log n \log \log n})\}, exp\{O(\sqrt{ \log n \log \log n})\})$-network-decompositions can be computed in distributed deterministic time $exp\{O(\sqrt{ \log n \log \log n})\}$.

The result of Awerbuch et al. was improved by Panconesi and Srinivasan in 1992: in the latter result $d = \ell = exp\{O(\sqrt{\log n})\}$, and the running time is $exp\{O(\sqrt{\log n})\}$ as well. Much more recently Barenboim (2012) devised a distributed randomized constant-time algorithm for computing strong network decompositions with $d = O(1)$. However, the parameter $\ell$ in his result is $O(n^{1/2 + \epsilon})$.

In this paper we drastically improve the result of Barenboim and devise a distributed randomized constant-time algorithm for computing strong $(O(1), O(n^{\epsilon}))$-network-decompositions. As a corollary we derive a constant-time randomized $O(n^{\epsilon})$-approximation algorithm for the distributed minimum coloring problem, improving the previously best-known $O(n^{1/2 + \epsilon})$ approximation guarantee. We also derive other improved distributed algorithms for a variety of problems.

Most notably, for the extremely well-studied distributed minimum dominating set problem currently there is no known deterministic polylogarithmic-time algorithm. We devise a {deterministic} polylogarithmic-time approximation algorithm for this problem, addressing an open problem of Lenzen and Wattenhofer (2010).

by Leonid Barenboim, Michael Elkin, Cyril Gavoille at May 22, 2015 01:30 AM

GPGPU Based Parallelized Client-Server Framework for Providing High Performance Computation Support. (arXiv:1505.05655v1 [cs.DC])

Parallel data processing has become indispensable for processing applications involving huge data sets. This brings into focus the Graphics Processing Units (GPUs), which emphasize many-core computing. With the advent of General Purpose GPUs (GPGPU), applications not directly associated with graphics operations can also harness the computation capabilities of GPUs. Hence, it would be beneficial if the computing capabilities of a given GPGPU could be task optimized and made available. This paper describes a client-server framework in which users can choose a processing task and submit large data-sets for processing to a remote GPGPU and receive the results back, using well defined interfaces. The framework provides extensibility in terms of the number and type of tasks that the client can choose or submit for processing at the remote GPGPU server machine, with complete transparency to the underlying hardware and operating systems. Parallelization of user-submitted tasks on the GPGPU has been achieved using NVIDIA Compute Unified Device Architecture (CUDA).

by Poorna Banerjee, Amit Dave at May 22, 2015 01:30 AM

A mechanized proof of loop freedom of the (untimed) AODV routing protocol. (arXiv:1505.05646v1 [cs.NI])

The Ad hoc On-demand Distance Vector (AODV) routing protocol allows the nodes in a Mobile Ad hoc Network (MANET) or a Wireless Mesh Network (WMN) to know where to forward data packets. Such a protocol is 'loop free' if it never leads to routing decisions that forward packets in circles. This paper describes the mechanization of an existing pen-and-paper proof of loop freedom of AODV in the interactive theorem prover Isabelle/HOL. The mechanization relies on a novel compositional approach for lifting invariants to networks of nodes. We exploit the mechanization to analyse several improvements of AODV and show that Isabelle/HOL can re-establish most proof obligations automatically and identify exactly the steps that are no longer valid.

by Timothy Bourke (INRIA), Robert J. van Glabbeek (NICTA), Peter Höfner (NICTA) at May 22, 2015 01:30 AM

Modelling the combined effects of land use and climatic changes: coupling bioclimatic modelling with markov-chain cellular automata in a case study in Cyprus. (arXiv:1505.05644v1 [q-bio.PE])

Two endemic plant species in the Mediterranean island of Cyprus, Crocus cyprius and Ophrys kotschyi, were used as a case study. We have coupled climate change scenarios and land use change models with species distribution models. Future land use scenarios were modelled by initially calculating the rate of current land use changes between two time snapshots (2000 and 2006) on the island, and based on these transition probabilities markov-chain cellular automata were used to generate future land use changes for 2050. Climate change scenarios A1B, A2, B1 and B2A were derived from the IPCC reports. Species climatic preferences were derived from their current distributions using classification trees, while habitat preferences were derived from the Red Data Book of the Flora of Cyprus. A bioclimatic model for Crocus cyprius was built using mean temperature of wettest quarter, max temperature of warmest month and precipitation seasonality, while for Ophrys kotchyi the bioclimatic model was built using precipitation of wettest month, mean temperature of warmest quarter, isothermality, precipitation of coldest quarter, and annual precipitation. Subsequently, simulation scenarios were performed regarding future species distributions, accounting for climate change alone and for both climate and land use changes. The distribution of the two species resulting from the bioclimatic models was then filtered by future land use changes, providing the species projected potential distribution. The species projected potential distribution varies depending on the type and scenario used, but many of both species current sites/locations are projected to be outside their future potential distribution. Our results demonstrate the importance of including both land use and climatic changes in predictive species modeling.

by Marianna Louca, Ioannis N. Vogiatzakis, Aristides Moustakas at May 22, 2015 01:30 AM

A generalization of Kung's theorem. (arXiv:1505.05628v1 [cs.IT])

We give a generalization of Kung's theorem on critical exponents of linear codes over a finite field, in terms of sums of extended weight polynomials of linear codes. For all i=k+1,...,n, we give an upper bound on the smallest integer m such that there exist m codewords whose union of supports has cardinality at least i.

by Trygve Johnsen, Keisuke Shiromoto, Hugues Verdure at May 22, 2015 01:30 AM

Parallel Streaming Signature EM-tree: A Clustering Algorithm for Web Scale Applications. (arXiv:1505.05613v1 [cs.IR])

The proliferation of the web presents an unsolved problem of automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes. ClueWeb09 and ClueWeb12 contain 500 and 733 million web pages and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine grained clustering has not been previously demonstrated. Previous approaches clustered a sample that limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than the existing algorithms. Fine grained clustering is necessary for meaningful clustering in massive collections where the number of distinct topics grows linearly with collection size. These fine-grained clusters show an improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing the quality of clusters where categorical labeling is unavailable and infeasible.

by Christopher M. de Vries, Lance De Vine, Shlomo Geva, Richi Nayak at May 22, 2015 01:30 AM

Millimeter Wave Beamforming Based on WiFi Fingerprinting in Indoor Environment. (arXiv:1505.05579v1 [cs.NI])

Millimeter Wave (mm-w), especially the 60 GHz band, has been receiving much attention as a key enabler for the 5G cellular networks. Beamforming (BF) is tremendously used with mm-w transmissions to enhance the link quality and overcome the channel impairments. The current mm-w BF mechanism, proposed by the IEEE 802.11ad standard, is mainly based on exhaustively searching for the best transmit (TX) and receive (RX) antenna beams. This BF mechanism requires a very high setup time, which makes it difficult to coordinate multiple mm-w Access Points (APs) in mobile channel conditions, as 5G requires. In this paper, we propose a mm-w BF mechanism which enables a mm-w AP to estimate the best beam to communicate with a User Equipment (UE) using statistical learning. In this scheme, the fingerprints of the UE WiFi signal and mm-w best beam identification (ID) are collected in an offline phase on a grid of arbitrary learning points (LPs) in target environments. Therefore, by just comparing the current UE WiFi signal with the pre-stored UE WiFi fingerprints, the mm-w AP can immediately estimate the best beam to communicate with the UE at its current position. The proposed mm-w BF can estimate the best beam, using a very small setup time, with a comparable performance to the exhaustive search BF.
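
The online phase of the proposed scheme is, at its core, a nearest-fingerprint lookup. A toy sketch of that step (types and names are invented for illustration; the paper's actual estimator is statistical learning over the fingerprint grid):

// Toy fingerprint lookup: given the UE's current WiFi RSSI vector, return the
// mm-w beam ID recorded at the closest learning point (Euclidean distance).
// Assumes a non-empty fingerprint database.
case class Fingerprint(rssi: Vector[Double], bestBeamId: Int)

def estimateBeam(current: Vector[Double], db: Seq[Fingerprint]): Int =
  db.minBy { fp =>
    fp.rssi.zip(current).map { case (a, b) => (a - b) * (a - b) }.sum
  }.bestBeamId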

by Ehab Mahmoud Mohamed, Kei Sakaguchi, Seiichi Sampei at May 22, 2015 01:30 AM

Fast exact summation using small and large superaccumulators. (arXiv:1505.05571v1 [cs.NA])

I present two new methods for exactly summing a set of floating-point numbers, and then correctly rounding to the nearest floating-point number. Higher accuracy than simple summation (rounding after each addition) is important in many applications, such as finding the sample mean of data. Exact summation also guarantees identical results with parallel and serial implementations, since the exact sum is independent of order. The new methods use variations on the concept of a "superaccumulator" - a large fixed-point number that can exactly represent the sum of any reasonable number of floating-point values. One method uses a "small" superaccumulator with sixty-seven 64-bit chunks, each with 32-bit overlap with the next chunk, allowing carry propagation to be done infrequently. The small superaccumulator is used alone when summing a small number of terms. For big summations, a "large" superaccumulator is used as well. It consists of 4096 64-bit chunks, one for every possible combination of exponent bits and sign bit, plus counts of when each chunk needs to be transferred to the small superaccumulator. To add a term to the large superaccumulator, only a single chunk and its associated count need to be updated, which takes very few instructions if carefully implemented. On modern 64-bit processors, exactly summing a large array using this combination of large and small superaccumulators takes less than twice the time of simple, inexact, ordered summation, with a serial implementation. A parallel implementation using a small number of processor cores can be expected to perform exact summation of large arrays at a speed that reaches the limit imposed by memory bandwidth. Some common methods that attempt to improve accuracy without being exact may therefore be pointless, at least for large summations, since they are slower than computing the sum exactly.
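
For intuition about what "exact" buys you, here is a slow reference implementation (not the paper's superaccumulator): every finite Double converts exactly to a java.math.BigDecimal, so summing those values incurs no rounding error and is independent of order. The paper's contribution is achieving the same result at close to the speed of ordinary summation.

import java.math.BigDecimal

// Slow but exact reference sum: each finite Double is exactly representable
// as a BigDecimal, so this sum has no rounding error and is order-independent.
def exactSum(xs: Seq[Double]): BigDecimal =
  xs.foldLeft(BigDecimal.ZERO)((acc, x) => acc.add(new BigDecimal(x)))

val xs = Seq(1e16, 1.0, -1e16)
println(xs.sum)        // 0.0 -- naive Double summation loses the 1.0
println(exactSum(xs))  // 1   -- the exact sum survives regardless of order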

by Radford M. Neal at May 22, 2015 01:30 AM

Protection and Deception: Discovering Game Theory and Cyber Literacy through a Novel Board Game Experience. (arXiv:1505.05570v1 [cs.CY])

Cyber literacy merits serious research attention because it addresses a confluence of specialization and generalization; cybersecurity is often conceived of as approachable only by a technological intelligentsia, yet its interdependent nature demands education for a broad population. Therefore, educational tools should lead participants to discover technical knowledge in an accessible and attractive framework. In this paper, we present Protection and Deception (P&G), a novel two-player board game. P&G has three main contributions. First, it builds cyber literacy by giving participants "hands-on" experience with game pieces that have the capabilities of cyber-attacks such as worms, masquerading attacks/spoofs, replay attacks, and Trojans. Second, P&G teaches the important game-theoretic concepts of asymmetric information and resource allocation implicitly and non-obtrusively through its game play. Finally, it strives for the important objective of security education for underrepresented minorities and people without explicit technical experience. We tested P&G at a community center in Manhattan with middle- and high school students, and observed enjoyment and increased cyber literacy along with suggestions for improvement of the game. Together with these results, our paper also presents images of the attractive board design and 3D printed game pieces, together with a Monte-Carlo analysis that we used to ensure a balanced gaming experience.

by Saboor Zahir, John Pak, Jatinder Singh, Jeffrey Pawlick, Quanyan Zhu at May 22, 2015 01:30 AM

Short Proofs of the Kneser-Lovász Coloring Principle. (arXiv:1505.05531v1 [math.LO])

We prove that the propositional translations of the Kneser-Lovász theorem have polynomial size extended Frege proofs and quasi-polynomial size Frege proofs. We present a new counting-based combinatorial proof of the Kneser-Lovász theorem that avoids the topological arguments of prior proofs for all but finitely many cases for each k. We introduce a miniaturization of the octahedral Tucker lemma, called the truncated Tucker lemma: it is open whether its propositional translations have (quasi-)polynomial size Frege or extended Frege proofs.

by James Aisenberg, Maria Luisa Bonet, Sam Buss, Adrian Crãciun, Gabriel Istrate at May 22, 2015 01:30 AM

/r/scala

[Hiring] Looking for a Scala developer in or wanting to move to Seattle, Ask Us Anything!

The business side:

Job Summary

It’s a great time to be an engineer at Avalara. Come to our brand new blazing orange office in downtown Seattle and live the island life as we bring our worldwide tax calculation engine to every corner of the Galaxy.

Ask yourself, why am I toiling away using antiquated tools, and supporting ancient software, instead of creating platforms that will change the world? Why am I not spending my days collaborating with engineers from my favorite companies on secret projects I can’t even tell my spouse about?

Come join the CloudConnect Team at Avalara. It’s sunny over here and we have the only office Tiki bar in Seattle. You will be joining a recently formed SkunkWorks team focused on building and shipping a unique hardware and software platform to allow Avalara to reach new markets around the world. Being a part of the CloudConnect Team is a rare experience, blending hardware, operating system and software development skills in a tightly focused, fast paced and creative race to the finish line.

Job Duties

  • Design and build great software
  • Participate in the engineering of unique hardware platforms
  • Always be shipping
  • Contribute your energy and creativity to building a great team

Qualifications

  • Bachelor's Degree or Equivalent Experience
  • 3-5+ Years Back End Development Experience in Java or C#
  • Experience with virtual machines and multiple hypervisors
  • Experience with building and troubleshooting server hardware
  • Linux server administration experience
  • Experience with Java, Bash
  • Experience with VMware, VirtualBox, Xen
  • Ready to work on a stealth project for high profile customers

Preferred Qualification

  • Functional language experience (F#, Scala, Haskell, Ruby, or JavaScript)
  • Experience with Scala, Play Framework, Akka, AMQP, or ActiveMQ
  • Experience with continuous integration tools like Jenkins
  • Experience with AWS administration
  • Experience with JUnit
  • Infrastructure and DevOps Support
  • Contributed to a major open source project
  • You have your own botnet
  • Most of your wardrobe is orange

About Avalara

Avalara helps businesses of all sizes achieve compliance with sales tax, excise tax, and other transactional tax requirements by delivering comprehensive, automated, cloud-based solutions that are fast, accurate and easy to use. Avalara’s end-to-end suite of solutions is designed to effectively manage complicated and burdensome tax compliance obligations imposed by state, local, and other taxing authorities in the United States and internationally.

Avalara offers hundreds of pre-built connectors into leading accounting, ERP, ecommerce and other business applications. The company processes millions of tax transactions for customers and free users every day, files hundreds of thousands of transactional tax returns per year, and manages millions of exemption certificates and other compliance related documents.

A privately held company, Avalara’s venture capital investors include Sageview Capital, Battery Ventures, Warburg Pincus, Arthur Ventures, and other institutional and individual investors. Avalara employs more than 750 people at its headquarters on Bainbridge Island, WA and in offices across the U.S. and in London, England and Pune, India. More information at: www.avalara.com

The personal side:

This is a project that my team and I have poured our blood, sweat and tears into for a number of years, and we're looking to add another adventurous developer to our team. Ask us anything!

You can message me directly or apply on our website at http://www.avalara.com/job-details/?id=o4yK0fw6&title=Core+Platform+Engineer%2C+Cloud+Connect+Team&location=Seattle%2C+WA%2C+United+States

submitted by avaengineer

May 22, 2015 01:12 AM

StackOverflow

Scala inheritance in function signature

I have

trait X {

}

class Y extends X

trait A { def run(x: X) { /* ... */ } }

class B extends A { def run(y: Y) }

However, Scala complains about B's run function.

I am confused about how method signatures work with inheritance. Class B should have a method that takes an X, but type Y is a type X.
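
For what it's worth, the sticking point is that overriding never lets a subclass narrow a parameter type: an A promises to accept any X, and a B that only accepted Y would break that promise. As written, def run(y: Y) is a new (bodiless, hence abstract) overload rather than an override, which is why the compiler complains. A minimal sketch of the usual alternative, a type parameter (the RunnerLike name is invented):

trait X
class Y extends X

trait A { def run(x: X): Unit = () }

// class B extends A { def run(y: Y) }  // not an override: an abstract overload

// Idiomatic alternative: make the accepted type a parameter of the trait.
trait RunnerLike[T <: X] { def run(t: T): Unit }

class B extends RunnerLike[Y] {
  def run(y: Y): Unit = println("got a Y")
}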

Thanks

by user1505108 at May 22, 2015 01:02 AM

QuantOverflow

Is the Binomial Tree Model not self-financing?

Consider a 2-period binomial tree where the derivative price is $f$ and the stock price is $S$. Also, let the bond be deterministic with continuous growth rate $r$ and initial value $B_0$. [figure: binomial tree]

Recall the replicating strategy is at each time $t_i$ hold $\phi_i = \frac{f_{i+1}^{up} - f_{i+1}^{down}}{S_{i+1}^{up} - S_{i+1}^{down}}$ units of the stock and $\psi_i = B_0^{-1} e^{-r(i+1)\Delta t}(f_{i+1}^{up} - \phi_i S_{i+1}^{up})$ units of the bond. In particular, the value of the portfolio at time $0$ is $V_0 = \phi_0 S_0 + \psi_0 B_0$. When we arrive at time tick 1, let's say our stock price went up to $S_3$. Before rebalancing, our portfolio is worth $V_0|_{end} = \phi_0 S_3 + \psi_0 B_0e^{r \Delta t}$, and after rebalancing it is $V_1 = \phi_1 S_3 + \psi_1 B_0e^{r \Delta t}$. In order for this to be self-financing, we must have $V_1 - V_0|_{end} = 0$. However,

\begin{align*} V_1 - V_0|_{end} & = (\phi_1 - \phi_0)S_3 + (\psi_1 - \psi_0)B_0e^{r \Delta t} \\ & = (\phi_1 - \phi_0)S_3 + \left(B_0^{-1} e^{-2r\Delta t}(f_7 - \phi_1 S_{7}) - B_0^{-1} e^{-r\Delta t}(f_{3} - \phi_0 S_{3})\right)B_0e^{r \Delta t} \\ & = (\phi_1 - \phi_0)S_3 + e^{-r\Delta t}(f_7 - \phi_1 S_{7}) - (f_{3} - \phi_0 S_{3}) \\ & = \phi_1 S_3 + e^{-r\Delta t}(f_7 - \phi_1 S_{7}) - f_{3} \\ & = \frac{f_{7} - f_{6}}{S_7 - S_6} S_3 + e^{-r\Delta t}(f_7 - \frac{f_{7} - f_{6}}{S_7 - S_6} S_{7}) - f_{3} \\ & = \frac{1}{S_7 - S_6} \left((f_{7} - f_{6})S_3 + (S_7 - S_6)e^{-r\Delta t}f_7 - e^{-r\Delta t}(f_{7} - f_{6}) S_{7} - (S_7 - S_6)f_{3} \right)\\ & = \frac{1}{S_7 - S_6} \left((f_{7} - f_{6})S_3 - S_6e^{-r\Delta t}f_7 + e^{-r\Delta t}f_{6} S_{7} - (S_7 - S_6)f_{3} \right)\\ & \neq 0. \end{align*}

It seems a lot of effort is put into self-financing strategies, and in fact the binomial representation theorem is used to prove their existence in the binomial model. Am I missing something?
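
One way to see what the derivation above may leave implicit: the final expression does vanish once $f_3$ is itself the discounted risk-neutral value of its children, $f_3 = e^{-r\Delta t}(q f_7 + (1-q) f_6)$ with $q = (S_3 e^{r\Delta t} - S_6)/(S_7 - S_6)$. A quick numeric sanity check, a sketch with invented prices:

// Check that V1 - V0|end = phi1*S3 + exp(-r*dt)*(f7 - phi1*S7) - f3 vanishes
// when f3 is priced risk-neutrally. All numbers are made up for illustration.
val (r, dt)      = (0.05, 1.0)
val (s3, s7, s6) = (110.0, 121.0, 99.0)            // node 3 and its two children
val (f7, f6)     = (21.0, 0.0)                     // derivative values at the children
val disc = math.exp(-r * dt)
val q    = (s3 / disc - s6) / (s7 - s6)            // risk-neutral up-probability
val f3   = disc * (q * f7 + (1 - q) * f6)          // risk-neutral value at node 3

val phi1 = (f7 - f6) / (s7 - s6)
println(phi1 * s3 + disc * (f7 - phi1 * s7) - f3)  // ~0 up to floating-point error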

by bcf at May 22, 2015 01:01 AM

Planet Theory

Constructing Intrinsic Delaunay Triangulations from the Dual of Geodesic Voronoi Diagrams

Authors: Yong-Jin Liu, Chun-Xu Xu, Dian Fan, Ying He
Download: PDF
Abstract: Intrinsic Delaunay triangulation (IDT) is a fundamental data structure in computational geometry and computer graphics. However, except for some theoretical results, such as existence and uniqueness, little progress has been made towards computing IDT on simplicial surfaces. To date the only way for constructing IDTs is the edge-flipping algorithm, which iteratively flips the non-Delaunay edge to be locally Delaunay. Although the algorithm is conceptually simple and guarantees to stop in finite steps, it has no known time complexity. Moreover, the edge-flipping algorithm may produce non-regular triangulations, which contain self-loops and/or faces with only two edges. In this paper, we propose a new method for constructing IDT on manifold triangle meshes. Based on the duality of geodesic Voronoi diagrams, our method can guarantee the resultant IDTs are regular. Our method has a theoretical worst-case time complexity $O(n^2\log n)$ for a mesh with $n$ vertices. We observe that most real-world models are far from their Delaunay triangulations, thus, the edge-flipping algorithm takes many iterations to fix the non-Delaunay edges. In contrast, our method is non-iterative and insensitive to the number of non-Delaunay edges. Empirically, it runs in linear time $O(n)$ on real-world models.

May 22, 2015 12:57 AM

QuantOverflow

Zero coupon bonds [on hold]

Assume the zero-coupon bonds from 1 year to 4 years are all available, and the current 1-year, 2-year, 3-year and 4-year spot rates are 4%, 5%, 6% and 7% accordingly. Interest rates are annually compounded. You want to lock in a 1-year interest rate beginning in 3 years, by using some of the zero-coupon bonds above.

Question: Which zero-coupon bonds would you use?

And what is the locked-in 1-year rate beginning in 3 years?
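
As a sketch of the standard arithmetic (just the implied forward rate; no claim about the intended textbook answer): the 3-year and 4-year zeros pin the rate down via $(1+r_4)^4 = (1+r_3)^3(1+f_{3,4})$, so buying the 4-year zero and shorting 3-year zeros locks it in.

// Implied 1-year rate starting in 3 years, from annually compounded spot rates.
val (r3, r4) = (0.06, 0.07)
val forward  = math.pow(1 + r4, 4) / math.pow(1 + r3, 3) - 1
println(f"f(3,4) = $forward%.4f")  // ~= 0.1006, i.e. about 10.06%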

by user3238961 at May 22, 2015 12:54 AM

StackOverflow

Play with Akka Logging

I'm using the Play Actor System to run a few actors. I'd like to add logging which is sent to files, but for whatever reason it isn't working. Here are the details:

My actors roughly look like this:

package actors

class MyActor() extends Actor with ActorLogging {
    def receive = {
        case Foo => log.info("Foo was sent")
    }
}

application.conf:

akka {
    logLevel = "DEBUG"
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    debug {
        autoreceive = on
        lifecycle = on
    }
}

I have added in my build.sbt file:

"com.typesafe.akka" %% "akka-slf4j" % "2.3.11"

And as documented here I added this to my logger.xml:

<logger name="akka" level="INFO" />

My logger.xml file is otherwise a copy of the default configuration for Play.

How can I get logging working properly for my actors? STDOUT isn't even working in this case.

I don't want to use play.api.Logger since it blocks.
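
One detail worth checking, offered as an observation rather than a confirmed fix: Akka's configuration key is the all-lowercase loglevel, so the logLevel shown above would be silently ignored. The corrected block would read:

akka {
    loglevel = "DEBUG"   # lowercase key; Akka does not recognise "logLevel"
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    debug {
        autoreceive = on
        lifecycle = on
    }
}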

by Jonathan Boudreau at May 22, 2015 12:54 AM

DragonFly BSD Digest

BSDNow: 090: ZFS Armistice

This week’s BSDNow episode talks with Jed Reynolds about ZFS on Linux and FreeBSD, and includes other news items, including DragonFly’s swap encryption, OpenBSD enabling openntpd by default, and plenty more.

by Justin Sherrill at May 22, 2015 12:51 AM

Planet Theory

Extended fast search clustering algorithm: widely density clusters, no density peaks

Authors: Wenkai Zhang, Jing Li
Download: PDF
Abstract: CFSFDP (clustering by fast search and find of density peaks) is a recently developed density-based clustering algorithm. Compared to DBSCAN, it needs fewer parameters and is computationally cheap since it does not iterate. Alex et al. have demonstrated its power in many applications. However, CFSFDP performs poorly when one cluster contains more than one density peak, a situation we call "no density peaks". In this paper, inspired by the hierarchical clustering algorithm CHAMELEON, we propose an extension of CFSFDP, E_CFSFDP, to handle more applications. In particular, we use the original CFSFDP to generate initial clusters first, then merge the sub-clusters in a second phase. We have run the algorithm on several data sets, among which some have "no density peaks". Experiment results show that our approach outperforms the original one because it relaxes the strict requirements CFSFDP places on data sets.
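
For context, the CFSFDP core that E_CFSFDP builds on is compact: each point gets a local density rho and a distance delta to the nearest denser point, and cluster centers are the points where both are large. A toy O(n^2) sketch for 2-D points (names invented, not the authors' code):

// Toy CFSFDP core: rho(i) = neighbours of point i within radius dc;
// delta(i) = distance from i to the nearest point of strictly higher density
// (for the densest point, the maximum distance to any point, by convention).
def densityAndDelta(pts: Array[(Double, Double)], dc: Double): Array[(Int, Double)] = {
  def dist(a: (Double, Double), b: (Double, Double)): Double =
    math.hypot(a._1 - b._1, a._2 - b._2)
  val rho = pts.map(p => pts.count(q => dist(p, q) < dc) - 1)
  pts.indices.map { i =>
    val denser = pts.indices.filter(j => rho(j) > rho(i))
    val delta =
      if (denser.isEmpty) pts.indices.map(j => dist(pts(i), pts(j))).max
      else denser.map(j => dist(pts(i), pts(j))).min
    (rho(i), delta)
  }.toArray
}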

May 22, 2015 12:46 AM

Better Distance Preservers and Additive Spanners

Authors: Greg Bodwin, Virginia Vassilevska Williams
Download: PDF
Abstract: We make improvements to the current upper bounds on pairwise distance preservers and additive spanners.

A {\em distance preserver} is a sparse subgraph that exactly preserves all distances in a given pair set $P$. We show that every undirected unweighted graph has a distance preserver on $O(\max\{n|P|^{1/3}, n^{2/3}|P|^{2/3}\})$ edges, and we conjecture that $O(n^{2/3}|P|^{2/3} + n)$ is possible.

An {\em additive subset spanner} is a sparse subgraph that preserves all distances in $S \times S$ for a node subset $S$ up to a small additive error function. Our second contribution is a new application of distance preservers to graph clustering algorithms, and an application of this clustering algorithm to produce new subset spanners. Ours are the first subset spanners that benefit from a non-constant error allowance. For constant $d$, we show that subset spanners with $+n^{d + o(1)}$ error can be obtained at any of the following sparsity levels:

[table omitted]

An {\em additive spanner} is a subset spanner on $V \times V$. The existence of error sensitive subset spanners was an open problem that formed a bottleneck in additive spanner constructions. By resolving this problem, we are able to prove that for any constant $d$, there are additive spanners with $+n^{d + o(1)}$ error at any of the following sparsity levels:

[table omitted]

If our distance preserver conjecture is true, then the fourth additive spanner is the best known for the entire range $d \in (0, 3/7]$. Otherwise, the first is the best known for $d \in [1/3, 3/7]$, the second is the best known for $d \in [3/13, 1/3]$, and the third is the best known for $d \in (0, 3/13]$.

As an auxiliary result, we prove that all graphs have $+6$ pairwise spanners on $\tilde{O}(n|P|^{1/4})$ edges.

May 22, 2015 12:46 AM

Graph colouring algorithms

Authors: Thore Husfeldt
Download: PDF
Abstract: This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available techniques and is organized by algorithmic paradigm.

May 22, 2015 12:46 AM

Graph edit distance : a new binary linear programming formulation

Authors: Julien Lerouge, Zeina Abu-Aisheh, Romain Raveaux, Pierre Héroux, Sébastien Adam
Download: PDF
Abstract: Graph edit distance (GED) is a powerful and flexible graph matching paradigm that can be used to address different tasks in structural pattern recognition, machine learning, and data mining. In this paper, some new binary linear programming formulations for computing the exact GED between two graphs are proposed. A major strength of the formulations lies in their genericity since the GED can be computed between directed or undirected fully attributed graphs (i.e. with attributes on both vertices and edges). Moreover, a relaxation of the domain constraints in the formulations provides efficient lower bound approximations of the GED. A complete experimental study comparing the proposed formulations with 4 state-of-the-art algorithms for exact and approximate graph edit distances is provided. By considering both the quality of the proposed solution and the efficiency of the algorithms as performance criteria, the results show that none of the compared methods dominates the others in the Pareto sense. As a consequence, faced to a given real-world problem, a trade-off between quality and efficiency has to be chosen w.r.t. the application constraints. In this context, this paper provides a guide that can be used to choose the appropriate method.

May 22, 2015 12:46 AM

Corruption Detection on Networks

Authors: Noga Alon, Elchanan Mossel, Robin Pemantle
Download: PDF
Abstract: We consider the problem of corruption detection on networks. We show that graph expansion is necessary for corruption detection and discuss algorithms and the computational hardness of the problem.

May 22, 2015 12:40 AM

Very Sparse Additive Spanners and Emulators

Authors: Greg Bodwin, Virginia Vassilevska Williams
Download: PDF
Abstract: We obtain new upper bounds on the additive distortion for graph emulators and spanners on relatively few edges. We introduce a new subroutine called "strip creation," and we combine this subroutine with several other ideas to obtain the following results:

  • Every graph has a spanner on $O(n^{1+\epsilon})$ edges with $\tilde{O}(n^{1/2 - \epsilon/2})$ additive distortion, for arbitrary $\epsilon\in [0,1]$.
  • Every graph has an emulator on $\tilde{O}(n^{1 + \epsilon})$ edges with $\tilde{O}(n^{1/3 - 2\epsilon/3})$ additive distortion whenever $\epsilon \in [0, \frac{1}{5}]$.
  • Every graph has a spanner on $\tilde{O}(n^{1 + \epsilon})$ edges with $\tilde{O}(n^{2/3 - 5\epsilon/3})$ additive distortion whenever $\epsilon \in [0, \frac{1}{4}]$.

Our first spanner has the new best known asymptotic edge-error tradeoff for additive spanners whenever $\epsilon \in [0, \frac{1}{7}]$. Our second spanner has the new best tradeoff whenever $\epsilon \in [\frac{1}{7}, \frac{3}{17}]$. Our emulator has the new best asymptotic edge-error tradeoff whenever $\epsilon \in [0, \frac{1}{5}]$.

May 22, 2015 12:40 AM

Complexity Theoretic Limitations on Learning Halfspaces

Authors: Amit Daniely
Download: PDF
Abstract: We study the problem of agnostically learning halfspaces which is defined by a fixed but unknown distribution $\mathcal{D}$ on $\mathbb{Q}^n\times \{\pm 1\}$. We define $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$ as the least error of a halfspace classifier for $\mathcal{D}$. A learner who can access $\mathcal{D}$ has to return a hypothesis whose error is small compared to $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$.

Using the recently developed method of the author, Linial and Shalev-Shwartz we prove hardness of learning results under a natural assumption on the complexity of refuting random $K$-$\mathrm{XOR}$ formulas. We show that no efficient learning algorithm has non-trivial worst-case performance even under the guarantees that $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D}) \le \eta$ for arbitrarily small constant $\eta>0$, and that $\mathcal{D}$ is supported in $\{\pm 1\}^n\times \{\pm 1\}$. Namely, even under these favorable conditions its error must be $\ge \frac{1}{2}-\frac{1}{n^c}$ for every $c>0$. In particular, no efficient algorithm can achieve a constant approximation ratio. Under a stronger version of the assumption (where $K$ can be poly-logarithmic in $n$), we can take $\eta = 2^{-\log^{1-\nu}(n)}$ for arbitrarily small $\nu>0$. Interestingly, this is even stronger than the best known lower bounds (Arora et al. 1993, Feldman et al. 2006, Guruswami and Raghavendra 2006) for the case that the learner is restricted to return a halfspace classifier (i.e. proper learning).

May 22, 2015 12:40 AM

StackOverflow

mapPartitionsWithIndex giving a different header

I have some csv files such as this:

ç
NU_NOTIF,CoordX_UTMSAD69,CoordY_UTMSAD69,TP_NOT,ID_AGRAVO,DT_NOTIFIC,SEM_NOT,NU_ANO,SG_UF_NOT,ID_UNIDADE,DT_SIN_PRI,SEM_PRI,CS_RACA,CS_ESCOL_N,ID_CNS_SUS,NDUPLIC_N,DT_DIGITA,DT_TRANSUS,DT_TRANSDM,DT_TRANSSM,DT_TRANSRM,DT_TRANSRS,DT_TRANSSE,NU_LOTE_V,NU_LOTE_H,CS_FLXRET,FLXRECEBI,IDENT_MICR,MIGRADO_W,DT_INVEST,ID_OCUPA_N,DT_SORO,RESUL_SORO,DT_NS1,RESUL_NS1,DT_VIRAL,RESUL_VI_N,DT_PCR,RESUL_PCR_,SOROTIPO,HISTOPA_N,IMUNOH_N,DOENCA_TRA,EPISTAXE,GENGIVO,METRO,PETEQUIAS,HEMATURA,SANGRAM,LACO_N,PLASMATICO,EVIDENCIA,PLAQ_MENOR,TP_SISTEMA,Long_WGS84,Lat_WGS84
2332769,"677873,18","7468220,51",2,A90,29/01/2010 00:00:00,201004,2010,33,2273225,11/01/2010 00:00:00,201002,9,03, , ,26/02/2010 00:00:00,,,16/11/2010 00:00:00,,,,2010041, , , , , ,29/01/2010 00:00:00, ,18/01/2010 00:00:00,1,, ,,4,,4, ,4,4,2, , , , , , , , , ,0.000000000000000,1,"-43.266430481500002","-22.884869715699999"
2273294,"676608,79","7467659,4",2,A90,22/01/2010 00:00:00,201003,2010,33,2708167,21/01/2010 00:00:00,201003,9,09, , ,04/02/2010 00:00:00,,,16/11/2010 00:00:00,,,,2010041, , , , , ,, ,, ,, ,, ,, , , , , , , , , , , , , , ,0.000000000000000,1,"-43.278688469099997","-22.890070246099999"
2446032,"669591,392118294","7467756,59464924",2,A90,15/01/2010 00:00:00,201002,2010,33,2296608,09/01/2010 00:00:00,201001,9,09, , ,15/01/2010 00:00:00,,,16/11/2010 00:00:00,,,,2010041, , , , , ,15/01/2010 00:00:00, ,,4,, ,,4,,4, ,4,4,9, , , , , , , , , ,0.000000000000000,1,"-43.347090180499997","-22.889919181600000"

In order to parse this while skipping the first line (which I don't know why it was placed there, but there's nothing I can do), I did:

val csv = sc.textFile("./project/Casos_Notificados_Dengue_01_2010.csv")

val rdd = csv.mapPartitionsWithIndex(
    (i,iterator) => if (i == 0 && iterator.hasNext){
      iterator.next
      iterator
    }else iterator)

And I used rdd.foreach(x => println(x.toString + "\n" )) to check whether the rdd was okay or not. Unfortunately, it's returning a random line as the first line instead of the header (which I assume should be the first line, right?).

So, the result is something like this:

2258026,"685693,42","7458369,42",2,A90,27/01/2010 00:00:00,201004,2010,33,3005992,25/01/2010 00:00:00,201004,9,09, , ,27/04/2010 00:00:00,,,07/12/2010 00:00:00,,,,2010049, , , , , ,, ,, ,, ,, ,, , , , , , , , , , , , , , ,0.000000000000000,1,"-43.189041385899998","-22.972965925200000"

NU_NOTIF,CoordX_UTMSAD69,CoordY_UTMSAD69,TP_NOT,ID_AGRAVO,DT_NOTIFIC,SEM_NOT,NU_ANO,SG_UF_NOT,ID_UNIDADE,DT_SIN_PRI,SEM_PRI,CS_RACA,CS_ESCOL_N,ID_CNS_SUS,NDUPLIC_N,DT_DIGITA,DT_TRANSUS,DT_TRANSDM,DT_TRANSSM,DT_TRANSRM,DT_TRANSRS,DT_TRANSSE,NU_LOTE_V,NU_LOTE_H,CS_FLXRET,FLXRECEBI,IDENT_MICR,MIGRADO_W,DT_INVEST,ID_OCUPA_N,DT_SORO,RESUL_SORO,DT_NS1,RESUL_NS1,DT_VIRAL,RESUL_VI_N,DT_PCR,RESUL_PCR_,SOROTIPO,HISTOPA_N,IMUNOH_N,DOENCA_TRA,EPISTAXE,GENGIVO,METRO,PETEQUIAS,HEMATURA,SANGRAM,LACO_N,PLASMATICO,EVIDENCIA,PLAQ_MENOR,TP_SISTEMA,Long_WGS84,Lat_WGS84

2258019,"686278,41","7459234,58",2,A90,18/01/2010 00:00:00,201003,2010,33,3005992,16/01/2010 00:00:00,201002,9,09, , ,22/01/2010 00:00:00,,,16/11/2010 00:00:00,,,,2010041, , , , , ,, ,, ,, ,, ,, , , , , , , , , , , , , , ,0.000000000000000,1,"-43.183441365699998","-22.965089100099998"

2332769,"677873,18","7468220,51",2,A90,29/01/2010 00:00:00,201004,2010,33,2273225,11/01/2010 00:00:00,201002,9,03, , ,26/02/2010 00:00:00,,,16/11/2010 00:00:00,,,,2010041, , , , , ,29/01/2010 00:00:00, ,18/01/2010 00:00:00,1,, ,,4,,4, ,4,4,2, , , , , , , , , ,0.000000000000000,1,"-43.

Does anyone know how to put the header on the first line? Also, is there any way to get just some columns of the csv using the mapPartitionsWithIndex?

EDIT 1

As @user3712791 stated, it lacked a true after the }else iterator, so for now it's working well.

val csv = sc.textFile("./project/Casos_Notificados_Dengue_01_2010.csv")

val rdd = csv.mapPartitionsWithIndex(
    (i,iterator) => if (i == 0 && iterator.hasNext){
      iterator.next
      iterator
    }else iterator, true)

@Paul, I misunderstood what mapPartitionsWithIndex does. I thought it did something like key-values, with the header as the key and the data rows as the values.

I believe now I have to do a groupBy to achieve this, or is there a better way to do it?

(I have to do this because I just need 5 columns from the data)
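
For the five-columns part of the question, no groupBy is needed; projecting inside a map is enough. A sketch (the column indices are placeholders, and note this data quotes fields that themselves contain commas, like "677873,18", so a plain split is only safe for unquoted columns; a proper CSV parser is the robust route):

// Keep a handful of columns by index. Indices here are invented examples;
// a naive split(",") will mis-handle the quoted "123,45" coordinate fields.
val wanted = Seq(0, 3, 5, 7, 9)
val projected = rdd.map(_.split(",")).map(fields => wanted.map(fields(_)))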

by PhaSath at May 22, 2015 12:16 AM

Lobsters

iOS security guide (8.3)

Exactly what it says on the tin. Surprisingly technical whitepaper on iOS's defence mechanisms.

by calvin at May 22, 2015 12:08 AM

infra-talk

CD for infrastruture services

For the last 6 months I've been consulting on a project to build a monitoring metrics storage service to store several hundred thousand metrics that are updated every ten seconds. We decided to build the service in a way that could be continuously deployed and use as many existing Open Source tools as possible.

There is a growing body of evidence to show that continuously deploying applications lowers defect rates and improves software quality. However, the significant corpus of literature and talks on continuous delivery and deployment is primarily focused on applications - there is scant information available on applying these CD principles to the work that infrastructure engineers do every day.

Through the process of building a monitoring service with a continuous deployment mindset, we've learnt quite a bit about how to structure infrastructure services so they can be delivered and deployed continuously. In this article we'll look at some of the principles you can apply to your infrastructure to start delivering it continuously.

How to CD your infrastructure successfully

There are two key principles for doing CD with infrastructure services successfully:

  1. Optimise for fast feedback. This is essential for quickly validating your changes match the business requirements, and eliminating technical debt and sunk cost before it spirals out of control.
  2. Chunk your changes. A CD mindset forces you to think about creating the shortest and smoothest path to production for changes to go live. Anyone who has worked on public facing systems knows that many big changes made at once rarely result in happy times for anyone involved. Delivering infrastructure services continuously doesn't absolve you from good operational practice - in fact it creates a structure that reinforces such practice.

Definitions

  • Continuous Delivery is different from Continuous Deployment in that in Continuous Delivery there is some sort of human intervention required to promote a change from one stage of the pipeline to the next. In Continuous Deployment no such breakpoint exists - changes are promoted automatically. The speed of Continuous Deployment comes at the cost of potentially pushing a breaking change live. Most discussion of "CD" rarely qualifies the terms.
  • An infrastructure service is a configuration of software and data that is consumed by other software - not by end users themselves. Think of them as “the gears of the internet”. Examples of infrastructure services include DNS, databases, Continuous Integration systems, or monitoring.

What the pipeline looks like

  1. Push. An engineer makes a change to the service configuration and pushes it to a repository. There may be ceremony around how the changes are reviewed, or they could be pushed directly into master.
  2. Detect and trigger. The CI system detects the change and triggers a build. This can be through polling the repository regularly, or a hosted version control system (like GitHub) may call out via a webhook.
  3. Build artifacts. The build sets up dependencies and builds any required software artifacts that will be deployed later.
  4. Build infrastructure. The build talks to an IaaS service to build the necessary network, storage, compute, and load balancing infrastructure. The IaaS service may be run by another team within the business, or an external provider like AWS.
  5. Orchestrate infrastructure. The build uses some sort of configuration management tool to string the provisioned infrastructure together to provide the service.

There is a testing step between almost all of these steps. Automated verification of the changes about to be deployed and the state of the running service after the deployment is crucial to doing CD effectively. Without it, CD is just a framework for continuously shooting yourself in the foot faster and not learning to stop. You will fail if you don't build feedback into every step of your CD pipeline.

Defining the service for quality feedback

  • Decide what guarantees you are providing to your users. A good starting point for thinking about what those guarantees should be is the CAP theorem. Decide if the service you're building is an AP or CP system. Infrastructure services generally tend towards AP, but there are cases where CP is preferred (e.g. databases).
  • Define your SLAs. This is where you quantify the guarantees you've just made to your users. These SLAs will relate to service throughput, availability, and data consistency (note the overlap with the CAP theorem). 95th-percentile response time for monitoring metric queries in a one-hour window is < 1 second and a single storage node failure does not result in graph unavailability are examples of SLAs.
  • Codify your SLAs as tests and checks. Once you've quantified your guarantees as SLAs, this is how you get automated feedback throughout your pipeline. These tests must be executed while you're making changes. Use your discretion as to whether you run all of the tests after every change, or a subset. (A sketch of one such check follows this list.)
  • Define clear interfaces. It's extremely rare you have a service that is one monolithic component that does everything. Infrastructure services are made of multiple moving parts that work together to provide the service, e.g. multiple PowerDNS instances fronting a MySQL cluster. Having clear, well defined interfaces are important for verifying expected interactions between parts before and after changes, as well as during the normal operation of the service.
  • Know your data. Understanding where the data lives in your service is vital to understanding how failures will cascade throughout your service when one part fails. Relentlessly eliminate state within your service by pushing it to one place and fronting access with horizontally scalable, immutable parts. Your immutable infrastructure is then just a stateless application.
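
To make the "codify your SLAs" point concrete, here is a minimal sketch of one SLA turned into a check. It is only a sketch: the latency sample would come from whatever metrics store you run, and all names are invented.

// SLA check: 95th-percentile query response time in the window must be < 1s.
// latenciesMs is a non-empty sample pulled from your metrics store.
def p95(latenciesMs: Seq[Double]): Double = {
  val sorted = latenciesMs.sorted
  sorted((0.95 * (sorted.size - 1)).round.toInt)
}

def querySlaHolds(latenciesMs: Seq[Double]): Boolean =
  p95(latenciesMs) < 1000.0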

Making it fast

Getting iteration times down is the most important goal for achieving fast feedback. From pushing a change to version control to having the change live should take less than 5 minutes (excluding cases where you have to build compute resources). Track execution time on individual stages in your pipeline with time(1), logged out to your CI job's output. Analyse this data to determine the min, max, median and 95th-percentile execution time for each stage. Identify which steps are taking the longest and optimise them.

Get your CI system close to the action. One nasty aspect of working with infrastructure services is the latency between where you are making changes from and where the service you're making changes to is hosted. By moving your CI system into the same point of presence as the service, you minimise latency between the systems.

This is especially important when you're interacting with an IaaS API to inventory compute or storage resources at the beginning of a build. Before you can act on any compute resources to install packages or change configuration files you need to ensure those compute resources exist, either by building up an inventory of them or creating them and adding them to said inventory.

Every time your CD runs it has to talk to your IaaS provider to do these three steps:

  1. Does the thing exist?
  2. Maybe make a change to create the thing
  3. Get info about the thing

Each of these steps requires sending and receiving often non-trivial amounts of data that will be affected by network and processing latency.

By moving your CI close to the IaaS API, you get a significant boost in run time performance. By doing this on the monitoring metrics storage project we reduced the CD pipeline build time from 20 minutes to 5 minutes.

Push all your changes through CI. It's tempting when starting out your CD efforts to push some changes through the pipeline, but still make ad-hoc changes outside the pipeline, say from your local machine.

This results in several problems:

  • You don't receive the latency reducing benefits of having your CI system close to the infrastructure.
  • You limit visibility to other people in your team as to what changes have actually been made to the service. That quick fix you pushed from your local machine might contribute to a future failure that your colleagues will have no idea about. The team as a whole benefits from having an authoritative log of all changes made.
  • You end up with divergent processes - one for ad-hoc changes and another for Real Changes™. Now you're optimising two processes, and those optimisations will likely clobber one another. Have fun.
  • You reduce your confidence that changes made in one environment will apply cleanly to another. If you're pushing changes through multiple environments before they reach production, one-off changes made outside the pipeline erode your certainty that a change which passes in one environment won't fail in another.

There's no point in lying: pushing all changes through CI is hard but worth it. It requires thinking about changes differently and embracing a different way of working.

The biggest initial pushback you'll probably get is having to context switch between your terminal where you're making changes and the web browser where you're tracking the CI system output. This context switch sounds trivial but I dare you to try it for a few hours and not feel like you're working more slowly.

Netflix Skunkworks' jenkins-cli is an absolute godsend here - it allows you to start, stop, and tail jobs from your command line. Your workflow for making changes now looks something like this:

git push && jenkins start $job && jenkins tail $job

The tail is the real killer feature here - you get the console output from Jenkins on your command line without the need to switch away to your browser.

Chunking your changes

Change one, test one is a really important way of thinking about how to apply changes so they are more verifiable. When starting out with CD the easiest path is to make all your changes and then test them all at the end, e.g.

  • Change app
  • Change database
  • Change proxy
  • Test app
  • Test database
  • Test proxy

What happens when your changes cause multiple tests to fail? You're faced with having to debug multiple moving parts without solid information on what is contributing to the failure.

There's a very simple solution to this problem: test immediately after you make each change:

  • Change app
  • Test app
  • Change database
  • Test database
  • Change proxy
  • Test proxy

When you make changes to the app that fail the tests, you'll get fast feedback and automatically abort all the other changes until you debug and fix the problem in the app layer.

If you were applying changes by hand you would likely be doing something like this anyway, so encode that good practice into your CD pipeline.
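In pipeline terms this is just interleaving change and test stages and stopping at the first failure. A minimal sketch (illustrative Scala; Step and the stage contents are stand-ins for your real change and test logic):

case class Step(name: String, change: () => Unit, test: () => Boolean)

// forall evaluates left to right and stops at the first false, so a failing
// test automatically aborts all the remaining changes.
def runPipeline(steps: Seq[Step]): Boolean =
  steps.forall { step =>
    step.change()
    val ok = step.test()
    if (!ok) println(s"${step.name}: tests failed, aborting pipeline")
    ok
  }

runPipeline(Seq(
  Step("app",      () => (), () => true), // stand-in change/test functions
  Step("database", () => (), () => true),
  Step("proxy",    () => (), () => true)
))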

Tests must finish quickly. If you've worked on a code base with good test coverage you'll know that slow tests are a huge productivity killer. Exactly the same here - the tests should be a help, not a hindrance. Aim to keep each test executing in under 10 seconds, preferably under 5 seconds.

This means you must make compromises in what you test. Test for really obvious things like “Is the service running?”, “Can I do a simple query?”, “Are there any obviously bad log messages?”. You'll likely see the crossover here with "traditional" monitoring checks. You know, those ones railed against as being bad practice because they don't sufficiently exercise the entire stack.

In this case, they are a pretty good indication your change has broken something. Aim for "good enough" fast coverage in your CD pipeline which complements your longer running monitoring checks to verify things like end-to-end behaviour.

Serverspec is your friend for quickly writing tests for your infrastructure.
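If you'd rather hand-roll checks than use Serverspec, the same "good enough" checks are only a few lines in any language. A sketch (Scala with scala.sys.process; the unit name, query command, and log path are all hypothetical):

import scala.sys.process._

// "Is the service running?" - systemctl exits 0 when the unit is active.
def serviceRunning(unit: String): Boolean =
  Seq("systemctl", "is-active", "--quiet", unit).! == 0

// "Can I do a simple query?" - any cheap end-to-end command will do.
def simpleQueryWorks(): Boolean =
  Seq("psql", "-c", "SELECT 1").! == 0

// "Any obviously bad log messages?" - grep exits 1 when nothing matches.
def noBadLogLines(path: String): Boolean =
  Seq("grep", "-q", "FATAL", path).! != 0

val ok = serviceRunning("myapp") && simpleQueryWorks() &&
         noBadLogLines("/var/log/myapp.log")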

Make the feedback visual. The raw data is cool, but graphs are better. If you're doing a simple threshold check and you're using something like Librato or Datadog, link to a dashboard.

If you want to take your visualisation to the next level, use gnuplot's dumb terminal output to graph metrics on the command line:

  1480 ++---------------+----------------+----------------+---------------**
       +                +                +                + ************** +
  1460 ++                                            *******              ##
       |                                      *******                 #### |
  1440 ++                    *****************                 #######    ++
       |                  ***                                ##            |
  1420 *******************                                  #             ++
       |                                                   #               |
  1400 ++                                                ##               ++
       |                                             ####                  |
       |                                          ###                      |
  1380 ++                                      ###                        ++
       |                                     ##                            |
  1360 ++                               #####                             ++
       |                            ####                                   |
  1340 ++                    #######                                      ++
       |                  ###                                              |
  1320 ++          #######                                                ++
       ############     +                +                +                +
  1300 ++---------------+----------------+----------------+---------------++
       0                5                10               15               20


CRITICAL: Deviation (116.55) is greater than maximum allowed (100.00)
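Producing a chart like that is mostly a matter of feeding gnuplot a script that selects the dumb terminal. A sketch of wiring it up (Scala; assumes only that the gnuplot binary is on the PATH):

import scala.sys.process._
import java.io.ByteArrayInputStream

// Render a metric series as an ASCII chart via gnuplot's dumb terminal.
def asciiPlot(values: Seq[Double]): String = {
  val script = new StringBuilder("set terminal dumb size 79, 24\n")
  script ++= "plot '-' with lines notitle\n"
  values.zipWithIndex.foreach { case (v, i) => script ++= s"$i $v\n" }
  script ++= "e\n" // gnuplot's end-of-inline-data marker
  ("gnuplot" #< new ByteArrayInputStream(script.toString.getBytes("UTF-8"))).!!
}

println(asciiPlot(Seq(1300, 1315, 1340, 1390, 1430, 1480).map(_.toDouble)))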

Conclusion

CD of infrastructure services is possible provided you stick to the two guiding principles:

  1. Optimise for fast feedback.
  2. Chunk your changes.

Focus on constantly identifying and eliminating bottlenecks in your CD pipeline to get your iteration time down.

by Lindsay Holmwood at May 22, 2015 12:00 AM

HN Daily

May 21, 2015

StackOverflow

Treat Type Parameter As A Numeric Using Implicits

I'm having trouble with the following polymorphic function definition:

scala> def foo[A](x : A, y : A)(implicit o : A => Numeric[A]) : A = x + y
<console>:7: error: type mismatch;
 found   : A
 required: String
       def foo[A](x : A, y : A)(implicit o : A => Numeric[A]) : A = x + y

I would like to specify that the type parameter A can be used as a Numeric, but it's not working. What am I doing wrong?
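For reference, this is the kind of usage I am after (a minimal sketch of what I believe is the usual approach: take Numeric[A] itself as a context bound and import its operator syntax from the standard library's Numeric.Implicits):

import Numeric.Implicits._

// The context bound A : Numeric supplies the implicit Numeric[A] instance,
// and Numeric.Implicits adds the + operator on top of it.
def foo[A : Numeric](x: A, y: A): A = x + y

foo(1, 2)     // 3
foo(1.5, 2.5) // 4.0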

by I.K. at May 21, 2015 11:58 PM

/r/netsec

Planet Theory

An Intentional and an Unintentional teaching experiment regarding proving the number of primes is infinite.


I taught Discrete Math Honors this semester. Two of the days were cancelled entirely because of snow (the entire school was closed) and four more I couldn't make because of health issues (I'm fine now). People DID sub for me on those days and DID do what I would have done. I covered some crypto which I had not done in the past.

Because of all of this I ended up not covering the proof that the primes were infinite until the last week.

INTENTIONAL EXPERIMENT: Rather than phrase it as a proof by contradiction I phrased it, as I think Euclid did, as

Given primes p1,p2,...,pn you can find a prime NOT on the list. (From this it easily follows that the primes are infinite.)

Proof: the usual one. Look at p1xp2x...xpn + 1; either it's prime or it has a prime factor not on the list.

The nice thing about doing it this way is that there are EASY examples where p1xp2x...xpn+1 is NOT prime

(e.g., the list {2,5,11} yields 2x5x11 + 1 = 111 = 3 x 37, so 3 and 37 are both not in {2,5,11})


whereas if you always use the product of the first n primes and then add 1, you don't get to a non-prime until 2x3x5x7x11x13 + 1 = 30031 = 59 x 509.
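(A quick way to play with the construction: the Scala snippet below, my own illustration, takes the smallest factor greater than 1 of the product plus one, which is necessarily a prime not on the list.)

// Smallest factor > 1 of m; it is necessarily prime.
def smallestPrimeFactor(m: Long): Long =
  (2L to math.sqrt(m.toDouble).toLong).find(m % _ == 0).getOrElse(m)

// Euclid's construction: a prime dividing p1 x ... x pn + 1 cannot be any pi.
def primeNotOnList(ps: Seq[Long]): Long = smallestPrimeFactor(ps.product + 1)

primeNotOnList(Seq(2, 5, 11))            // 3   (111 = 3 x 37)
primeNotOnList(Seq(2, 3, 5, 7, 11, 13))  // 59  (30031 = 59 x 509)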

They understood the proof better than prior classes had, even prior honors classes.

UNINTENTIONAL: Since I did the proof at the end of the semester they ALREADY had some proof maturity, more so than had I done it (as I usually do) about 1/3 of the way through the course.

They understood the proof better than prior classes had, even prior honors classes. Hence I should prove all of the theorems in the last week! :-)

But seriously, they did understand it better, but I don't know which of the two factors, or what combination caused it. Oh well.


by GASARCH (noreply@blogger.com) at May 21, 2015 11:56 PM

CompsciOverflow

What are the main ideas used in a Fenwick tree?

While I was solving a programming competition question, I came across a technique advertised as suitable for the solution. A so-called Fenwick tree (a.k.a. binary indexed tree) was at the heart of this technique. So, in an effort to understand Fenwick trees, I researched all the online sources in vain, because reading them in detail is a pain!

No one seems to be able to explain the main concepts behind this data structure in a concise and clear way. Can anyone please give a high level exposition of the main ideas related to Fenwick trees without going into unnecessary details?
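For concreteness, here is the kind of minimal sketch I mean (Scala, illustrative only): the two operations every exposition builds up to. As far as I can tell, the whole idea is that i & -i isolates the lowest set bit of i, which determines both the range of positions a cell is responsible for and how to climb the implicit tree.

class Fenwick(n: Int) {
  private val tree = new Array[Long](n + 1) // 1-indexed implicit tree

  // Add delta at position pos: visit every cell whose range covers pos.
  def update(pos: Int, delta: Long): Unit = {
    var i = pos
    while (i <= n) { tree(i) += delta; i += i & -i }
  }

  // Sum of positions 1..pos: strip the lowest set bit at each step.
  def prefixSum(pos: Int): Long = {
    var i = pos; var s = 0L
    while (i > 0) { s += tree(i); i -= i & -i }
    s
  }
}

Both operations touch O(log n) cells, which is presumably why the structure shows up in competition settings. But why does this cell layout work at all?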

by user698585 at May 21, 2015 11:40 PM

/r/netsec

CompsciOverflow

Are there Petri Net diagrams worth adding "new lives"? [on hold]

I would like to add “new lives” to Petri Net diagrams that were published between 1962 and 2014. The Petri Nets with “new lives” would combine the published Petri Net diagrams from the past with JavaScript code and supporting graphics to create interactive and dynamic Petri Nets in PDF. In other words, the revival creates token game versions of the Petri Net diagrams in PDF.

I am limiting the number of Petri Nets to a maximum of two per year.

Question 1: Which Petri Net diagrams should I include? Why should I include them?

Once I have finalized the list of Petri Net diagrams to revive, I am hoping to finish the work as soon as possible. So before I begin, I will be looking for volunteers who are interested in adding “new lives” to the Petri Net diagrams. I will create at least two token game versions. Thus I will be looking for a maximum of 102 volunteers – one person per token game version. If there are fewer than 102, I will create the difference.

Question 2: Would you be interested in helping out? If so, please give me a shout.

  • john

by John Frederick Chionglo at May 21, 2015 11:24 PM

StackOverflow

Has anyone compiled FreeBSD with LTO-capable linker?

Has anyone enabled libLTO when compiling the FreeBSD kernel (in order to compute a whole-program call graph)? I want to compile the FreeBSD kernel using the libLTO tool from the llvm/clang compiler suite. If anyone has previously done this work, can you show me how it is done or how to proceed?

by matrixaliser at May 21, 2015 11:15 PM

Side effects not realized without deref

From Clojure for the Brave and True:

(defmacro enqueue
  [q concurrent-promise-name & work]
  (let [concurrent (butlast work)
        serialized (last work)]
    `(let [~concurrent-promise-name (promise)]
       (future (deliver ~concurrent-promise-name (do ~@concurrent)))
       (deref ~q)
       ~serialized
       ~concurrent-promise-name)))
(defmacro wait
  "Sleep `timeout` seconds before evaluating body"
  [timeout & body]
  `(do (Thread/sleep ~timeout) ~@body))
(time @(-> (future (wait 200 (println "'Ello, gov'na!")))
           (enqueue saying (wait 400 "Pip pip!") (println @saying))
           (enqueue saying (wait 100 "Cheerio!") (println @saying))))

If I comment out the (deref ~q) line, then only "Cheerio!" is printed. Why do I need deref here to get other side effects?

by qed at May 21, 2015 11:05 PM

Fefe

Creepy technology of the day: What they have accomplished: ...

Creepy technology of the day:
What they have accomplished: building, from a voice recording only a few minutes long, a personality profile of a person that comes within ninety percent of what psychologists find out in days of work with various testing procedures, when they take that person apart by every trick of the trade.
I know what you're thinking, and you're right! Yeah! We're automating the psychologists away!

But what if someone actually trusts this computer esotericism? Look at how this software supposedly works:

The individual sections of my evaluation have headings like: "word usage frequencies", "use of parts of speech", "your personal language profile", "your own sources of drive", "typical personality traits" or "resilience under stress".
Oh, really? You read all of that out of a 15-minute speech sample? May I take this opportunity to point to Spurious Correlations?
He says: "Your inner state is good; we can see that in the speech and voice characteristics."
"In your future I see a journey and a fulfilling relationship!"
He says: "You weren't nervous when you answered the questions on the phone; we can see that in the perturbation of your voice."
You need software for that?! And what role does that play anyway? Isn't it normal to be nervous when someone wants to try out magic software on you?
He says: "You rarely say 'one'; that means you are personable, clear, emotional and people-oriented."
"Your dead mother says she misses you very much!"
He also says a few things that are less flattering. But in my eyes he is almost always right. When I do say about one measurement that I don't believe it's accurate, though, he counters: "We're not measuring what you think about yourself, but how you compare to the general population of test subjects. That's much more objective."
Oh, I seeeee! Well then!

This all sounds like carnival tricks to me. "Trust the computer, the computer knows what it's doing!"

And like marks at a carnival, the victims here willingly fall for it and even invent their own excuses for why they, and not the machine, are at fault.

The software, you see, advances into areas of the psyche that the person in question has never even engaged with. "A quarter to a third of the insights the machine generates are new to the person being examined," says Gratzel.
Could that be because the "insights" have little foundation in reality?

May 21, 2015 11:01 PM

Planet Clojure

TheoryOverflow

Relativized world where $L^A=NP^A$

I wonder[1] whether there is a known relativization barrier against proving $L\neq NP$. Hence I'm looking for a language $A$ for which $L^A=NP^A$.

My first idea was to try $A:=SAT$, but then I thought that $L^A\subset P^A = \Delta_2^P$ and $NP^A=\Sigma_2^P$. This seems to disqualify not only $A:=SAT$, but any complete problem $A$ from the polynomial hierarchy for my purpose.

My next idea was to try $A:=TQBF$, where $TQBF$ is the $PSPACE$-complete problem to decide true quantified Boolean formulas. But $P^A=NP^A$ is well known, and $L^A=NP^A$ is a stronger statement, so $L^A=NP^A$ would be well known, if anybody had proved it.

My question is just whether there is a known relativization barrier. We certainly can't prove that such a language $A$ can't exist, because otherwise we would get $L\neq NP$ as corollary.


[1] I know that logarithmically space bounded TMs with stack recognize exactly the languages from $P$, independent of whether the TM is deterministic or not. I don't know whether this result relativizes, but I guess it does. So I wonder whether there can be a language $A$ with $NL^A=P^A$. But asking for $L^A=NP^A$ instead seems easier, because of the connection to the polynomial hierarchy.

by Thomas Klimpel at May 21, 2015 10:48 PM

StackOverflow

How do I enable continuations in Scala?

The details of how to get access to the shift and reset operations have changed over the years. Old blog entries and Stack Overflow answers may have out-of-date information.

See also What are Scala continuations and why use them? which talks about what you might want to do with shift and reset once you have them.

by Seth Tisue at May 21, 2015 10:41 PM

UnixOverflow

How to check if Nginx + Ruby on Rails web application is down?

I am very new to *nix systems. I have the setup below:

  • OS - Ubuntu 14.14
  • Web Server - Nginx with Passenger
  • Web Application - Ruby on Rails web application

Please tell me what tools to use to achieve:

  • Continually check whether the web application is running, and send an email to our sysadmin if the application is down.

by AntonIva at May 21, 2015 10:39 PM

Lobsters

QuantOverflow

Discounted Stock Price

I have the following Question :

Prove that under the risk-neutral probability $p$ the stock and the bank account have the same average rate of growth. In other words, if $S_0, S_N$ are the initial and final stock prices and $B_0, B_N$ the initial and final bank account values, show that:

$$ E[S_N / S_0 ] = E[B_N / B_0 ] = c $$

Hint : The discounted stock price is a martingale under P.

Could you explain to me what is the discounted stock price ?
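(For context, my current understanding, which may be wrong: the discounted stock price measures the stock in units of the bank account,

$$\tilde S_n = \frac{S_n}{B_n},$$

and the hint says that under the risk-neutral measure $E[\tilde S_{n+1} \mid \mathcal{F}_n] = \tilde S_n$, hence $E[S_N/B_N] = S_0/B_0$. With a deterministic bank account this rearranges to $E[S_N/S_0] = B_N/B_0 = E[B_N/B_0]$. Is that the intended reading?)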

by user2505650 at May 21, 2015 10:38 PM

TheoryOverflow

Definition of a Data Structure?

Lately I have been looking around for a formal definition of what a data structure is. I can find neither a paper nor a book with such a definition. Even the famous "The Art of Computer Programming" is missing one. Even for just a linked data structure, e.g., a binary search tree, I couldn't find one.

So, is there one? Perhaps a paper that is talking about it?

By formal definition I'm thinking of something like "A data structure is a tuple (Op, El, ..) where ...".

by equality at May 21, 2015 10:35 PM

CompsciOverflow

How did NASA remotely fix the code on the Mars Pathfinder? [migrated]

In 1997, NASA remotely fixed a bug that caused priority inversion on their Mars Pathfinder. How did they go about doing this? What kind of communication protocols are used? How do they update the source for an operating system, compile it, and run it from a remote location? This might be simpler than I thought, but to me this seems like quite the feat!

Story of the bugfix here: http://research.microsoft.com/en-us/um/people/mbj/mars_pathfinder/authoritative_account.html

The author said to email him and he would provide details, but this was almost 20 years ago. Curious to see if anyone else knows how this worked.

by kkopsa at May 21, 2015 10:10 PM

StackOverflow

A little confusion with the sorted function in Haskell

sorted :: Ord a => [a] -> Bool
sorted xs = and [x <= y | (x,y) <- pairs xs]
  where pairs ys = zip ys (tail ys) -- assumed textbook helper: adjacent pairs

Can anyone explain to me what this random 'and' is doing after the '='? It works when I compile it, but it doesn't make logical sense to me. Is it because Haskell works recursively, and it uses the 'and' to compare the next item?

Any insight is highly appreciated.

by eyes enberg at May 21, 2015 10:07 PM

Introducing variables named by user in a macro

So I want to define a macro that introduces a couple of variables as variable declarations such that the names are defined by the user of the macro.

Say that I want to be able to write something like the following.

foldOver(0)(hd, rest)(List(1, 2, 3)) {
  hd + rest
}

My first thought on how to do this was to make the variables parameters and manually inspect them in some way. I wound up toying with a definition like the following.

def foldOver[A, B](z : B)(next : A, acc : B)(lst : List[A])(f : B): B = macro foldOverImpl[A, B]
def foldOverImpl[A : c.WeakTypeTag, B : c.WeakTypeTag]
                (c: Context)
                (z : c.Expr[B])(next : c.Expr[A], acc : c.Expr[B])
                (lst : c.Expr[List[A]])
                (f : c.Expr[B]): c.Expr[B] = {
    import c.universe._
    (next, acc) match {
      case (TermName(nextName), TermName(accName)) => {
        c.Expr[B](q"""{
          <do whatever here>
        }""")
      }
      case otherwise => {
        throw new Exception("the 'next' and 'acc' variables must be term names")
      }
    }
}

but when I use the macro as shown at the top, it gives me an error saying that it can't find those variable names in the program. Well, it shouldn't: I want those to be declared inside the macro, not in the containing context.

So, is there a way to receive names from users and use those to declare variables within a macro?

by Jake at May 21, 2015 10:02 PM

Fefe

Have you heard this one? In his work, because of the problematic ...

Have you heard this one?
According to the special investigator's report, his work occasionally led to "bizarre" situations because of the problematic density of informants in the neo-Nazi scene: Corelli, for instance, once exchanged views in a chat with the then Klan leader about two other right-wing extremists. What they apparently didn't know: all four people were informants.
That would be a lot funnier if the headline weren't "State paid informant almost 300,000 euros".

May 21, 2015 10:01 PM

Planet Emacsen

(or emacs: New on MELPA - define word at point

Doing things in Emacs is superlatively better than having to switch to another application.

In this case, "doing things" is getting the dictionary definition of the word at point, and "superlatively" is a word that I didn't know - the straw that broke the camel's back and caused me to finally automate the process of getting the definition of a word that I encounter in an Emacs buffer.

The whole process of writing the define-word package took around 30 minutes; I just had to:

  • See which engine DuckDuckGo uses.
  • Follow to wordnik.
  • Try to get an API key, read their draconian TOS and decide that I don't want to agree to it just to get their key.
  • Examine the HTML that it returns and note that it's quite regular.
  • Write a 10 line function with re-search-forward to extract the word definitions from a sample page that I saved with wget.

Then just wrap the function in an url-retrieve and done. It's a good thing that I learned to use url-retrieve when I wrote org-download.

Here's how it looks in action. The word under point is "Authors", and instead of visiting this page, you can see it right away in your Echo Area:

[demo]

The result is displayed simply with message, so it doesn't mess with your window config. You read it, press any key and the Echo Area popup will vanish automatically.

Install the package from MELPA or check it out at github. You just need to decide where to bind it:

(global-set-key (kbd "C-c d") 'define-word-at-point)
(global-set-key (kbd "C-c D") 'define-word)

At less than 50 lines, the source is very easy to understand. So if you're looking to write some Elisp that retrieves and parses some HTML from a web service, it's nice to look at a simple implementation of how it's done.

by (or emacs at May 21, 2015 10:00 PM

QuantOverflow

How to test if two portfolios have the same composition

I'm facing two different portfolios in the CAPM framework, derived as $$\omega_P=\frac{E(r)-r_f}{H}\,\Sigma^{-1}(\mu-\iota r_f)$$ on the same assets but, for example, on different time samples (or on the same assets but one), where $H=(\mu-\iota r_f)'\Sigma^{-1}(\mu-\iota r_f)$ and $\iota$ is a vector of ones.

I'd like to test (if possible) whether the weights of the two portfolios are statistically equal or not. I was thinking I need the distribution of the weights so I can proceed as usual with hypothesis testing, but the weights look like deterministic quantities, so they do not have a distribution.

How can I proceed?

by Marco at May 21, 2015 09:40 PM

/r/osdev

Some of the OSDev.org wiki tutorials are really bad

I tried to learn OS development using these tutorials maybe 5 or 15 years ago. I just moved on to other things because I didn't get the hang of it at all.

I read through these pages yesterday. Having read lots of programming material since, I can immediately spot that they're rubbish.

Lots of the material has improved a lot by now. It's possible you'll find enough information in digestible form to get into OS development, but the old stuff remains on the web too.

The summaries for the different formats can be all right, but for most of the articles in the OSDev wiki, I'd say don't bother with them. Find a better resource if you can.

To get an idea of how it sucks, we could look at this GDT Tutorial. It's not unusual in how it's built compared to other 'instructions' on the website. The problem is that it's in recipe format. It doesn't tell you, or point to a resource describing, what the GDT is or why you need the segments it lists. Like seven pairs of high heels, you just need them. The article doesn't take into account whether you're a boy or a girl or Raiden from MGS2 when asserting it.

The mistake I made when learning last time was to assume that ordinary programming rules and mechanisms don't apply when you're going into kernel or assembly programming. I made that mistake because of an emphasis that it's somehow harder than userspace programming, or illegal somehow. Sentiments of "good luck" from bozos don't make it hard. Their misinformation, on the other hand, can very well do that.

When the OSDev wiki (or any wiki really, or 4chan) tells you to do something, you should triple-check why. If it's not telling you, it's probably wrong, harmful, or out of context for you.

Of course, it's a wiki these days. Maybe it could be improved. I'm not sure how the authors would respond if the articles were modified. The stuff is quite old and unchanged.

submitted by htuhola
[link] [11 comments]

May 21, 2015 09:39 PM

Lobsters

StackOverflow

Tuple matching in Scala: 'match' failing to match returned tuple (Future, Json)

I've written a utility class to help with unit testing. One function, fakeRequest(), will process a request, parse the response, validate the JSON, and make sure things went well in general. It then returns a tuple of (response, parsedJSON) to the caller. If things go poorly, it returns a failure.

This all works fine, but I can't figure out how to match on the returned tuple.

What I want: (1) if I get (response, parsedJSON), validate the response; (2) if I get failure, fail the test case.

My fakeRequest() looks like this:

def fakeRequest[A, B](target: () => Call, request: A, response: B)(implicit f1: Format[A], f2: Format[B]) = {
    route(FakeRequest(target()).withJsonBody(Json.toJson(request))) match {
        case Some(response) => {
            contentType(response) must beSome("application/json")
            charset(response) must beSome("utf-8")

            GPLog(s"fakeRequest: HTTP status ${status(response)}: JSON response: ${contentAsString(response)}")
            val parsedJSON = Json.parse(contentAsString(response)).validate[B]

            (response, parsedJSON)
        }
        case _ => failure
    }
}

And this is how I'm trying to match on the response. I've tried a couple dozen variations and can't get it to match on a response:

fakeRequest(controllers.routes.GPInviteService.invite, test, GPInviteService.InviteResponse(true, None, None)) match {
    case (response, JsSuccess(invite: GPInviteService.InviteResponse, _)) => { // Never matches
        status(response) must beEqualTo(UNPROCESSABLE_ENTITY)
        invite.inviteSent must beFalse
        invite.error.get.code must beEqualTo(GPError.ValidationFailed.code)
    }
    case e: JsError => failure
    ...

This invariably does not work. However, and this is what is really confusing me, I can return the response status code, just not the response object itself. E.g., if I change fakeRequest as such:

def fakeRequest[A, B](target: () => Call, request: A, response: B)(implicit f1: Format[A], f2: Format[B]) = {
    route(FakeRequest(target()).withJsonBody(Json.toJson(request))) match {
        case Some(response) => {
            // ...
            (status(response), parsedJSON) // return status, not response?
        }
        case _ => failure
    }
}

I can test against the status just fine, and everything works as expected:

fakeRequest(controllers.routes.GPInviteService.invite, test, GPInviteService.InviteResponse(true, None, None)) match {
    case (status, JsSuccess(response: GPInviteService.InviteResponse, _)) => { // Matches!
        status must beEqualTo(UNPROCESSABLE_ENTITY)
        response.inviteSent must beFalse
        response.error.get.code must beEqualTo(GPError.ValidationFailed.code)
    }
    case e: JsError => failure
}

I'd love to understand why I can match a simple status code, but I can't match a more complex object.

FYI, I even tried case (a, b) => println(a) just to see what would happen. It didn't match when returning the response.

by Zac at May 21, 2015 09:24 PM

CompsciOverflow

Arthur-Merlin protocol to decide a set size

Please look at the example here at the bottom of page 3, http://www.cs.nyu.edu/~khot/CSCI-GA.3350-001-2014/sol3.pdf

  • Here it seems that the set whose size Arthur is trying to approximate is known only in a very implicit way, i.e., as the number of satisfying assignments of a SAT formula. I guess the issue here is that Arthur on his own cannot just find out this set with his polynomial resources.

    Do all set lower bounding protocols work under this assumption that the set is so implicit that Arthur can't even compute the elements of the set on his own? (Or else, when asked if the set has at least $k$ elements, Arthur could have just counted the elements of the set up to $k$ and decided without the need of Merlin - right?)

  • Is that $k+2$ a typo when one says that Arthur picks a hash function mapping $\{0,1\}^n \rightarrow \{0,1\}^{k+2}$ ?

  • What motivates this factor of $2^{100}$? Could that be anything else?

  • Can someone help derive the inequality at the last step, i.e.,

    $$\Pr_{A,b}[y \in \mathrm{Im}(S)] \geq \sum_{x \in S} \Pr[f(x) = y] - \sum_{x_1, x_2 \in S} \Pr[f(x_1) = f(x_2) = y]\,?$$

by user6818 at May 21, 2015 09:00 PM

StackOverflow

When to use Ask pattern in Akka

I've started to learn Akka, and in many official examples I see request-response implemented using the tell pattern, i.e. after a worker finishes its work, it sends the result back to the sender as a new message. For example, this official Pi approximation tutorial shows how to design an application where the Master sends some work to workers and then receives the results as further messages.

Master code:

def receive = {
  case Calculate ⇒
    for (i ← 0 until nrOfMessages) workerRouter ! Work(i * nrOfElements, nrOfElements)
  case Result(value) ⇒
    pi += value
    nrOfResults += 1
    if (nrOfResults == nrOfMessages) {
      // Send the result to the listener
      listener ! PiApproximation(pi, duration = (System.currentTimeMillis - start).millis)
      // Stops this actor and all its supervised children
      context.stop(self)
    }
}

Worker code:

 def receive = {
    case Work(start, nrOfElements) ⇒
      sender ! Result(calculatePiFor(start, nrOfElements)) // perform the work
  }

But I'm wondering why this example didn't use the ask pattern. What is wrong with using the ask pattern here?
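For reference, here is roughly what I imagine the ask-based version of the Master's send loop would look like (an untested sketch using Akka's standard akka.pattern.ask; the 5-second timeout is arbitrary, and Work, Result, workerRouter, nrOfMessages and nrOfElements are the names from the code above):

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

implicit val timeout: Timeout = Timeout(5.seconds)

// Each ? returns a Future[Any]; mapTo recovers the expected Result type.
val results: Seq[Future[Result]] =
  (0 until nrOfMessages).map { i =>
    (workerRouter ? Work(i * nrOfElements, nrOfElements)).mapTo[Result]
  }

// The Master could then combine the futures (e.g. with Future.sequence)
// instead of counting Result messages by hand.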

If it's ok to use the ask pattern here, then I have another question: how can I stop all worker actors after the work is done?

  1. Should my worker actors send PoisonPill message to themselves?
  2. Or should Master actor Broadcast(PoisonPill)?
  3. Or there some another more elegant way?

by MyTitle at May 21, 2015 08:47 PM

CompsciOverflow

Show that regular languages are closed under $\text{2-part}(L)$

Let the operation: $$\text{2-part}(L) = \{ w : ww \in L \}$$

I need to show that the regular languages are closed under this operation. A word $w$ satisfies $ww\in L$ iff there's a state $q$ such that $\hat\delta(q_0, w) = q$ and $\hat\delta(q, w) = q'$ where $q'\in F$.

So the NFA needs somehow to "guess" this $q$ state. How do I implement this kind of NFA?
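One construction I think works is the standard product construction (a sketch, assuming $L$ is given by a DFA $(Q, \Sigma, \delta, q_0, F)$; please correct me if it is wrong): guess $q$ up front, then simulate both halves of the word in parallel and verify the guess at the end.

$$Q' = Q \times Q \times Q, \qquad S' = \{(q, q_0, q) : q \in Q\},$$
$$\delta'((q, p_1, p_2), a) = \{(q, \delta(p_1, a), \delta(p_2, a))\},$$
$$F' = \{(q, p_1, p_2) : p_1 = q,\ p_2 \in F\}.$$

The first component remembers the guessed state, the second simulates $\hat\delta(q_0, w)$, and the third simulates $\hat\delta(q, w)$; acceptance checks exactly the two required conditions.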

Thanks.

by Elimination at May 21, 2015 08:40 PM

TheoryOverflow

Shortest path in DAG with path dependent arc costs

I've got the following problem

Consider a DAG $G=(V,E)$ with vertex set $V=\{v_1,\ldots,v_n\}$ and edge set $E=\{e_1,\ldots,e_m\}$, with associated costs $c_1,\ldots,c_m$. The problem is to find the shortest paths from an initial vertex $s$ to multiple targets $t_1,\ldots,t_k$, taking these costs into account. A typical shortest path problem.

However, my problem is slightly different: the costs $c_1,\ldots,c_m$ depend on the previously traversed nodes. Is there an alternative to the brute-force solution of finding all the simple paths between $s$ and $t$ and then selecting the one with the lowest cost?

Paths can be compared. Take as an example the following graph: red arcs are "checkpoint arcs". All the sub-paths whose last arc is a checkpoint arc can be compared, so a local decision can be taken. Similarly, costs are reset after traversing such arcs: 22->23 has a different cost depending on whether the subpath includes the arc 11->22 or not.

[figure: example DAG with the checkpoint arcs drawn in red]

EDIT 2:

Costs are related to two sequences, $P$ and $Q$, where $len(P)=N$ and $len(Q)=M$. In the aforementioned case $M=N=4$. Each element of $P$ and $Q$ is a tuple $(t,p)$; the tuple $(t_i,p_i)$ is the $i$th tuple of the sequence. Tuples are constrained on $t$: $t_{i+1} > t_i$. There's no constraint on $p$.

Let $s_p$ be a set of valid indices for the sequence $P$ and $s_q$ a set of valid indices for the sequence $Q$. The cost is $C(s_p,s_q) = C_t(s_p,s_q) + C_p(s_p,s_q)$, where $C_t = \max(\max(P[s_p][t]),\max(Q[s_q][t])) - \min(\min(P[s_p][t]),\min(Q[s_q][t]))$ and $C_p = \max(\max(P[s_p][p]),\max(Q[s_q][p])) - \min(\min(P[s_p][p]),\min(Q[s_q][p]))$.

The cost $C(s_p,s_q)$ represents the "diameter" of the element obtained by merging the $s_p$th elements of $P$ and the $s_q$th elements of $Q$.

Each path in the graph builds different $s_p$ and $s_q$ sets. For example, the path 00,11,22,33 has a cost $C_{tot} = C(0,0)+ C(1,1)+ C(2,2)+ C(3,3)$, while the path 00 - 01 - 12 - 22 - 33 has a cost $C_{tot} = C(0,[0,1])+ C([1,2],2)+ C(3,3)$

by Felipe Rojas at May 21, 2015 08:16 PM