# Planet Primates

## April 27, 2015

### Fefe

#### Bug of the Day: Is anyone here running Wordpress? Really? ...

Bug of the day: Is anyone here running Wordpress? Really? Still? How many more times does this have to blow up?

Uniper is to be the name of the part of E.ON that will bundle the energy group's conventional power generation. The green remainder of E.ON is to move back to the Ruhr area.

### CompsciOverflow

#### Bidding Algorithm - processor allocation - distributed OS

What does the "price" in a bidding algorithm for processor allocation in distributed OS mean?

### StackOverflow

#### Overriding methods map and flatMap in class extending trait Iterator

As a Scala beginner, I'm trying to implement two counters over a for expression: one incremented for every item an Iterator retrieves and processes, and one incremented every time a new iteration over one of the "loops" of the expression (the outer loop or a nested loop) is started. The requirement is to accomplish this without simply placing a statement like counter = counter + 1 at numerous locations in the for expression. The following listing shows my proposed solution to this problem. I would like to know why the method next, which implements Iterator's abstract member, gets called (and the corresponding counter is incremented), whereas flatMap and map, which override their counterparts defined in trait Iterator (and call them via super), are not called at all.

import scala.collection.GenTraversableOnce

object ZebraPuzzle {
  var starts = 0
  var items = 0

  class InstrumentedIter[A](it: Iterator[A]) extends Iterator[A] {
    private val iterator = it

    def hasNext = it.hasNext

    def next() = {
      items = items + 1
      it.next()
    }

    override def flatMap[B](f: (A) ⇒ GenTraversableOnce[B]): Iterator[B] = {
      starts = starts + 1
      super.flatMap(f)
    }

    override def map[B](f: (A) ⇒ B): Iterator[B] = {
      starts = starts + 1
      super.map(f)
    }
  } // inner class InstrumentedIter


The corresponding for expression looks like this:

  def solve = {
    val first = 1
    val middle = 3
    val houses = List(first, 2, middle, 4, 5)
    for {
      List(r, g, i, y, b) <- new InstrumentedIter(houses.permutations)
      if ...
      List(eng, span, ukr, jap, nor) <- new InstrumentedIter(houses.permutations)
      if ...
      if ...
      if ...
      List(of, tea, milk, oj, wat) <- new InstrumentedIter(houses.permutations)
      if ...
      ...
    } yield ...
    ...
  }
...
} // standalone singleton object ZebraPuzzle


I would be grateful if someone could give me a hint how to solve the given problem in a better way. But most of all I am interested to know why my solution overriding Iterator's map and flatMap doesn't work like expected by my somewhat limited brain ;-)

Regards

Martin
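The reported behaviour can be reproduced with a minimal wrapper (a sketch, assuming Scala 2.12+): a guard (and likewise a pattern such as List(r, g, i, y, b) on the left of <-) desugars to a withFilter call, which Iterator implements by returning a plain, unwrapped iterator, so the map/flatMap that follow no longer dispatch through the overriding subclass.

```scala
// Sketch: count how often the map override on a wrapping iterator is entered.
var mapCalls = 0

class Wrapped[A](it: Iterator[A]) extends Iterator[A] {
  def hasNext = it.hasNext
  def next()  = it.next()
  override def map[B](f: A => B): Iterator[B] = { mapCalls += 1; super.map(f) }
}

// Without a guard the for expression desugars to a plain .map call,
// which dispatches through the override:
val noGuard = for (x <- new Wrapped(Iterator(1, 2, 3))) yield x * 2
val afterMap = mapCalls // 1

// With a guard it desugars to .withFilter(...).map(...): withFilter returns
// an ordinary Iterator, so the override is bypassed:
val guarded = for (x <- new Wrapped(Iterator(1, 2, 3)) if x > 1) yield x * 2
val afterGuard = mapCalls // still 1

println((afterMap, afterGuard)) // (1,1)
```

This suggests that counting via map/flatMap overrides alone cannot see the stages that run after a withFilter.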

### TheoryOverflow

#### Problems solvable by a turing machine that is guaranteed to halt

Consider a Turing machine that is guaranteed to halt in time $O(f(n))$ for a known subset $S$ of input states, where $f$ is some computable function. What is the language that can be recognized by this Turing machine, for some given function $f$, assuming the input states given are only those that are members of $S$?

#### Complexity of algebraic problems

I am looking for a list about complexity of various numerical/algebraic problems. E.g.

Adleman once published a list focused on $P$ and $NP$ but it seems outdated.

Mumford (http://www.dam.brown.edu/people/mumford/alg_geom/papers/1993a-WhatComputeAG-Bayer-DAM.pdf) has a paper on what is computable in Algebraic Geometry, without regard to complexity.

Does anyone know the list of (major) discoveries since that list was published?

What are some problems of number-theoretic/algebraic flavor whose complexity class is possibly already known (since that list was published), unknown but speculated to lie somewhere, or unknown and beyond speculation?

### CompsciOverflow

#### Algorithm to determine Strongly Regular Graph(SRG)

I am looking for an algorithm/formula/theorem to determine whether a graph is Strongly Regular Graph(SRG) or not.

I have an idea-

Let $G = (V,E)$ be a regular graph with $v$ vertices and degree $k$. $G$ is said to be strongly regular if there are also integers $λ$ and $μ$ such that:

Every two adjacent vertices have $λ$ common neighbours.

Every two non-adjacent vertices have $μ$ common neighbours.

A graph of this kind is said to be an $SRG(v, k, λ, μ)$. Then:

1. $v$ and $k$ can be easily determined (by counting each row/column); then use $(v-k-1)μ=k(k-1-λ)$ to get an equation in $λ$ and $μ$.
2. for each vertex of $v$, the above equation must hold iff the graph is a Strongly Regular Graph (SRG).

Correct? Are there algorithms for this question? Any reference/info would help, thanks.
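As a baseline, the definition itself can be checked directly in $O(v^3)$ time from the adjacency matrix; a sketch (isSRG is an illustrative name, not a standard routine):

```scala
// Brute-force SRG check on an adjacency matrix: verify regularity, then that
// every adjacent pair shares the same λ common neighbours and every
// non-adjacent pair the same μ. Returns Some((v, k, λ, μ)) on success.
def isSRG(adj: Array[Array[Int]]): Option[(Int, Int, Int, Int)] = {
  val v = adj.length
  val degrees = adj.map(_.sum).toSet
  if (degrees.size != 1) None // not regular
  else {
    val k = degrees.head
    def common(i: Int, j: Int) = (0 until v).count(x => adj(i)(x) == 1 && adj(j)(x) == 1)
    val pairs = for { i <- 0 until v; j <- 0 until v if i < j } yield (adj(i)(j), common(i, j))
    val lams = pairs.collect { case (1, c) => c }.toSet // λ candidates
    val mus  = pairs.collect { case (0, c) => c }.toSet // μ candidates
    if (lams.size <= 1 && mus.size <= 1)
      Some((v, k, lams.headOption.getOrElse(0), mus.headOption.getOrElse(0)))
    else None
  }
}

// The 5-cycle is the classic SRG(5, 2, 0, 1):
val c5 = Array.tabulate(5, 5)((i, j) => if ((i - j + 5) % 5 == 1 || (j - i + 5) % 5 == 1) 1 else 0)
println(isSRG(c5)) // Some((5,2,0,1))
```

The counting identity $(v-k-1)μ = k(k-1-λ)$ is a necessary condition implied by this check, but it is not by itself sufficient.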

### CompsciOverflow

#### Complexity of computing optimal pure follower-strategy in response to leader's mixed strategy

I'm trying to see whether the following problem is NP hard and if so, how I can reduce an NP complete problem to it.

There is a leader who is defending a set of targets. His pure strategies are subsets of these targets with size at most $k$. His mixed-strategy is a probability distribution over these pure strategies.

There is a follower who wants to attack these targets. His pure strategies are subsets of these targets as well.

The leader catches the follower and receives a payoff of 0 if the follower attacks any target that is covered by the leader. The follower receives a certain payoff if he is able to successfully attack a set of targets (i.e., none of these targets are covered by the leader).

The follower knows the pure-strategies of the leader, as well as the mixed-strategy. However, he does not know what particular pure-strategy the leader will play. His aim is to find the optimal pure-strategy (set of targets) to play that will guarantee him the most payoff.

This is solvable with a simple linear program. But I'm trying to see if it is NP hard and what NP complete problem I can reduce to it. The strategy space for the follower grows exponentially since he has $\sum\limits_{i=1}^{k}\binom{n}{i}$ pure strategies available to him. He has to figure out which one of these gives him the best payoff.

For a particular pure-strategy, the follower's expected payoff is the sum of the product of the payoff for playing that particular strategy against a pure leader-strategy and the probability that the leader plays that pure strategy. So it is basically $\sum\limits_{D_i \in \mathcal{D}}p(A, D_i)d_i$, where $D_i$ is a pure leader-strategy, $A$ is a particular pure strategy, and $d_i$ is the probability that the leader plays the $D_i$ strategy. He wants to find the highest expected payoff, and so essentially the maximum out of a total of $|\mathcal{A}| \times |\mathcal{D}|$ options.

I've seen a particular example being reduced to 3-SAT. That particular example had a follower choosing a path through a graph, and the leader could block him by placing resources on edges. In my example I just have a set of targets that the attacker can attack.

### StackOverflow

#### What is the equivalent of Ansible's delegate_to in Puppet

When using the Ansible provisioner, the delegate_to: ip_address can be used to execute actions on the machine that originally invoked ansible (the host) instead of the guest.

When using Puppet, what would be a similar equivalent?

#### Scala IDE and Apache Spark -- different scala library version found in the build path

I have some main object:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object Main {

  def main(args: Array[String]) {
    val sc = new SparkContext(
      new SparkConf().setMaster("local").setAppName("FakeProjectName")
    )
  }
}


...then I add spark-assembly-1.3.0-hadoop2.4.0.jar to the build path in Eclipse from

Project > Properties... > Java Build Path :

...and this warning appears in the Eclipse console:

More than one scala library found in the build path
(C:/Program Files/Eclipse/Indigo 3.7.2/configuration/org.eclipse.osgi/bundles/246/1/.cp/lib/scala-library.jar,
This is not an optimal configuration, try to limit to one Scala library in the build path.
FakeProjectName Unknown Scala Classpath Problem


Then I remove Scala Library [2.10.2] from the build path, and it still works. Except now this warning appears in the Eclipse console:

The version of scala library found in the build path is different from the one provided by scala IDE:
2.10.4. Expected: 2.10.2. Make sure you know what you are doing.
FakeProjectName Unknown Scala Classpath Problem


Is this a non-issue? Either way, how do I fix it?

### QuantOverflow

#### Effective & Maturity Date Modified Following

I am constructing discount curve for tenor 1 month.

First Instrument - PLN_1M_WIBOR has Effective Date on 2015-01-29 (spot). I was wondering what the Maturity Date should be: 2015-02-27 or 2015-03-02? I am using the modified following convention. According to this convention I suppose it should be 2015-02-27, but I am not sure.

The second instrument is FRA_0102, dated today (2015-01-27), so its Effective Date should be 2015-02-27?
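For what it's worth, the 1M roll can be sketched in code (weekend-only calendar; a real curve also needs the Warsaw holiday calendar, and java.time is assumed here):

```scala
import java.time.{DayOfWeek, LocalDate}

// Modified following: roll forward to the next business day, but if that
// crosses into the next month, roll backward instead.
def modifiedFollowing(d: LocalDate): LocalDate = {
  def isWeekend(x: LocalDate) =
    x.getDayOfWeek == DayOfWeek.SATURDAY || x.getDayOfWeek == DayOfWeek.SUNDAY
  def nextBiz(x: LocalDate): LocalDate = if (isWeekend(x)) nextBiz(x.plusDays(1)) else x
  def prevBiz(x: LocalDate): LocalDate = if (isWeekend(x)) prevBiz(x.minusDays(1)) else x
  val rolled = nextBiz(d)
  if (rolled.getMonth != d.getMonth) prevBiz(d) else rolled
}

// 2015-01-29 + 1M = 2015-02-28, a Saturday; "following" would be Monday
// 2015-03-02, which crosses into March, so modified following rolls back
// to Friday 2015-02-27.
println(modifiedFollowing(LocalDate.of(2015, 1, 29).plusMonths(1))) // 2015-02-27
```

Under this convention the unadjusted date 2015-02-28 does adjust to 2015-02-27, consistent with the guess above.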

### TheoryOverflow

#### Dimensionality reduction in machine learning

This is less of a question and more of a "here's my take let me know if you agree" (so I guess it might turn into a big-list?).

Dimensionality reduction refers to a collection of techniques that input data and return a lower-dimensional version, with some distortion. PCA and Johnson-Lindenstrauss are the most common examples.

From an algorithmic perspective, the tradeoff is clear: lower dimensionality yields faster runtimes and reduced storage space, but compromises precision. I call this the dimension-distortion tradeoff.

From a statistical/information-theoretic perspective, the situation seems less clear. It is commonly believed that dimensionality reduction (PCA in particular) has a denoising effect and thus should actually improve the performance. On the other hand, dimensionality reduction does discard information, which might cause the performance to degrade. Thus, one must address the statistical question: is the information I'm discarding noise or potentially useful (and even if useful, might the benefits of lower dimension still outweigh the losses)?

There appear to be very few formal analyses of the statistical benefits of dimensionality reduction. One that I'm aware of is in our ALT'13 paper (with Gottlieb and Krauthgamer). The setting there is fairly general -- metric spaces.

Are there other formal analyses of the statistical benefits of dimensionality reduction? Perhaps other tradeoffs besides those mentioned above?
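Since Johnson-Lindenstrauss is mentioned above, a toy random projection makes the dimension-distortion tradeoff concrete (a sketch; randomProjection and its Gaussian scaling are illustrative, not taken from any of the cited work):

```scala
import scala.util.Random

// Project d-dimensional points to k dimensions with a Gaussian matrix
// scaled by 1/sqrt(k), the standard JL-style construction.
def randomProjection(points: Seq[Array[Double]], k: Int, seed: Long = 42L): Seq[Array[Double]] = {
  val d = points.head.length
  val rnd = new Random(seed)
  val m = Array.fill(k, d)(rnd.nextGaussian() / math.sqrt(k)) // k x d projection matrix
  points.map(p => m.map(row => row.indices.map(i => row(i) * p(i)).sum))
}

val pts  = Seq(Array(1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
               Array(0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0))
val proj = randomProjection(pts, 3)
println(proj.map(_.length)) // each point now has 3 coordinates
```

The JL lemma guarantees that with $k = O(\log n / \varepsilon^2)$ such a projection preserves pairwise distances up to $1 \pm \varepsilon$ with high probability, which quantifies the algorithmic side of the tradeoff; the statistical side is exactly what the question asks about.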

### StackOverflow

#### scala nested for comprehension with futures

My case domain classes are as below

case class Account(randomId: String, accounts: List[String]) // for each of accounts I need to get AccountProfiles

case class AccountProfiles(actId: String, profiles: List[String], additionalInfo: Map[String, String], ......)

case class AccountInfo(id: String, profiles: List[String]) // for each AccountProfiles I need to construct AccountInfo


my access layer implementation signature to extract above domain classes look like below

getLinked(): Future[Account]
getAccountProfile(actId: String): Future[AccountProfiles]


Can I have a for comprehension to construct Future list of AccountInfo domain object with the help of getLinked and getAccountProfile methods ?
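A sketch of one way to do it (the domain classes are simplified and the access layer is stubbed here with Future.successful): Future.traverse collapses the per-account lookups into a single Future of a list, which slots into a for comprehension.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Simplified stand-ins for the domain classes and access layer.
case class Account(randomId: String, accounts: List[String])
case class AccountProfiles(actId: String, profiles: List[String])
case class AccountInfo(id: String, profiles: List[String])

def getLinked(): Future[Account] =
  Future.successful(Account("r1", List("a1", "a2")))
def getAccountProfile(actId: String): Future[AccountProfiles] =
  Future.successful(AccountProfiles(actId, List(s"$actId-p")))

// Future.traverse turns List[String] => List[Future[AccountProfiles]]
// into a single Future[List[AccountProfiles]].
val infos: Future[List[AccountInfo]] =
  for {
    account  <- getLinked()
    profiles <- Future.traverse(account.accounts)(getAccountProfile)
  } yield profiles.map(p => AccountInfo(p.actId, p.profiles))

println(Await.result(infos, 5.seconds))
```

The same shape works with any real access layer, since only the two method signatures are used.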

#### How can I set file creation times in ZFS?

I've just got a NAS running ZFS and I'd like to preserve creation times when transferring files into it. Both linux/ext4 (where the data is now) and zfs store creation time or birth time. In the case of zfs it's even reported by the stat command. But I haven't been able to figure out how I can set the creation time of a file so it mirrors the creation time in the original file system. Unlike an ext4->ext4 transfer where I can feed debugfs a script to set the file creation times.

Is there a tool similar to debugfs for ZFS?

### CompsciOverflow

#### Why doesn't parallelism necessarily imply non-determinism?

I'm a student reading a book on threads, and when I got to non-deterministic and parallel programs, I got a bit confused. I hope you can help me out.

I understand the difference between concurrency and parallelism. I get that concurrent programs are non-deterministic depending on the precise timing of events. "But parallelism doesn't necessarily imply non-determinism" - as said in the book. Why is that?

Does that imply that it all depends on the languages that support parallelism, which implies that these languages should execute parallel programs in a deterministic manner?

Another question I have: what exactly does the timing of events in concurrent programs depend on? The architecture of the machine?

#### Distributed vs parallel computing

I often hear people talking about parallel computing and distributed computing, but I'm under the impression that there is no clear boundary between the 2, and people tend to confuse that pretty easily, while I believe it is very different:

• Parallel computing is more tightly coupled to multi-threading, or how to make full use of a single CPU.
• Distributed computing refers to the notion of divide and conquer, executing sub-tasks on different machines and then merging the results.

However, since we stepped into the Big Data era, it seems the distinction is indeed melting, and most systems today use a combination of parallel and distributed computing.

An example I use in my day-to-day job is Hadoop with the Map/Reduce paradigm, a clearly distributed system with workers executing tasks on different machines, but also taking full advantage of each machine with some parallel computing.

I would like to get some advice to understand how exactly to make the distinction in today's world, and if we can still talk about parallel computing or there is no longer a clear distinction. To me it seems distributed computing has grown a lot over the past years, while parallel computing seems to stagnate, which could probably explain why I hear much more talking about distributing computations than parallelizing.

### StackOverflow

#### Scala Option type upper bound don't understand

I'm reading Functional Programming in Scala, and in chapter 04 the authors implement Option on their own. Now, when defining the function getOrElse they use an upper bound to restrict the type of A to a supertype (if I understood correctly)

So, the definition goes:

sealed trait Option[+A] {
  def getOrElse[B >: A](default: => B): B = this match {
    case None => default
    case Some(a) => a
  }
}


So, when we have something like

val a = Some(4)
println(a.getOrElse(None)) // println prints an integer value
val b = None
println(b.getOrElse(Some(3))) // println prints an Option[Integer] value


a has type Option[Int], so A would be type Int. B would be type Nothing. Nothing is a subtype of every other type. That means that Option[Nothing] is a subtype of Option[Int] (because of covariance), right?

But with B >: A we said that B has to be a supertype?! So how can we get an Int back? This is a bit confusing for me...

Anyone care to try and clarify?
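A sketch that makes the inference visible (Opt stands in for the book's Option to avoid clashing with the standard library): the compiler does not pick B = Nothing for the argument's type; it widens B to the least upper bound of A and the argument's type, and that LUB does satisfy B >: A.

```scala
sealed trait Opt[+A] {
  def getOrElse[B >: A](default: => B): B = this match {
    case Non    => default
    case Som(a) => a
  }
}
case object Non extends Opt[Nothing]
case class Som[+A](a: A) extends Opt[A]

val a: Opt[Int] = Som(4)
// A = Int, the argument has type Non.type; B is inferred as the least
// upper bound of Int and Non.type (essentially Any), which is >: Int.
val r = a.getOrElse(Non)
println(r) // 4
```

So the Int comes back unchanged; only the static type of the result is widened, never the value.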

### StackOverflow

#### proper use of scala traits and types

New to scala and trying to get the hang of the class system. Here's a simple set up:

sealed trait Shape{
  def sides:Int
}

final case class Square() extends Shape {
  def sides() = 4
}

final case class Triangle() extends Shape {
  def sides() = 3
}


Now, I want to create a function that takes anything of type shape, which we know will have a sides() method implemented, and make use of that method.

def someFunction(a: Shape)={
  val aShape = a()
  aShape.sides()
}


But this hits an error at val aShape = a(), as there's no type a.

I realize that in this example, it's excessive to create someFunction, since sides() can be accessed directly from the objects. But my primary question is in the context of someFunction - I'd like to pass a class to a function, and instantiate an object of that class and then do something with that object. Thanks for your help.
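A sketch covering both readings of the question (someFunction2 and the factory-function idea are illustrative additions): a Shape parameter is already an instance, so nothing needs to be instantiated inside the function; if the goal really is to pass "something that can be instantiated", pass a factory function instead.

```scala
sealed trait Shape { def sides: Int }
final case class Square() extends Shape { def sides = 4 }
final case class Triangle() extends Shape { def sides = 3 }

// a already *is* a Shape instance: call its method directly.
def someFunction(a: Shape): Int = a.sides

// Alternative: accept a factory and instantiate inside the function.
def someFunction2(makeShape: () => Shape): Int = makeShape().sides

val r1 = someFunction(Square())
val r2 = someFunction2(() => Triangle())
println((r1, r2)) // (4,3)
```

The compile error at val aShape = a() comes from treating the value a as something callable; removing that line and using a.sides resolves it.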

#### Proper use of Scala traits and case objects

Trying to get the hang of Scala classes and traits, here's a simple example. I want to define a class which specifies a variety of operations, that could be implemented in lots of ways. I might start with,

sealed trait Operations{
  def multiply
}


So for example, I might instantiate this class with an object that does add and multiply very sensibly,

case object CorrectOperations extends Operations{
  def multiply(a:Double,b:Double)= a*b
}


And also, there could be other ways of defining those operations, such as this obviously wrong way,

case object SillyOperations extends Operations{
  def add(a:Double,b:Double)= a + b - 10
  def multiply(a:Double,b:Double)= a/b
}


I would like to pass such an instance into a function that will execute the operations in a particular way.

def doOperations(a:Double,b:Double, op:operations) = {
}


I would like doOperations to take any object of type operations so that I can make use of the add and multiply, whatever they may be.

What do I need to change about doOperations, and what am I misunderstanding here? Thanks
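A sketch of the usual shape this takes (with typed add and multiply signatures added to the trait, which the original omits): declare the full method signatures in Operations, and doOperations can then accept any implementation, capitalized Operations as the parameter type.

```scala
sealed trait Operations {
  def add(a: Double, b: Double): Double
  def multiply(a: Double, b: Double): Double
}

case object CorrectOperations extends Operations {
  def add(a: Double, b: Double)      = a + b
  def multiply(a: Double, b: Double) = a * b
}

// op can be any Operations implementation; here it returns both results.
def doOperations(a: Double, b: Double, op: Operations): (Double, Double) =
  (op.add(a, b), op.multiply(a, b))

val res = doOperations(2, 3, CorrectOperations)
println(res) // (5.0,6.0)
```

The two fixes relative to the listings above: the trait must declare parameter lists and return types for every operation, and the parameter type in doOperations must be the trait name Operations, not lowercase operations.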

### QuantOverflow

#### Why the expected return rate of a stock has nothing to do with its option price?

OK, I admit that this is a frequently asked question. But I couldn't find a satisfying answer after I read the explanations of books, went through the derivations of B-S formula, and searched answers online. My question is that, I can understand the derivation of the B-S formula, but what is the intuition that the expected return rate of a stock has nothing to do with its option price?

Suppose I have two stocks A and B, whose price is the same today, both worth 20 dollars. Stock A has an expected return of 0.5 dollars/week and a volatility of 50%; stock B has an expected return of 10 dollars/week and a volatility of 1%. For call options with strike price 40 dollars expiring in 1 month (4 weeks), how can the option price for stock A be greater than that for B, since stock A is expected to be worth only 22 dollars while B is worth 60 dollars? Any intuitive explanations?
I can also make examples where stock A has a vanishingly small expected return and B has an infinitely large expected return.
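One standard way to see the intuition asked about above: in the Black-Scholes derivation, the option is priced by replication, and delta-hedging cancels the stock's drift. The hedged portfolio must earn the risk-free rate, which yields

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0,$$

in which the stock's expected return never appears: only the volatility $\sigma$ and the risk-free rate $r$ enter the price. Equivalently, anyone who disagrees with the market's drift can trade the stock itself; the option adds no extra exposure to drift once it is hedged.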

### StackOverflow

#### Wiki xml parser - org.apache.spark.SparkException: Task not serializable

I am a newbie to both Scala and Spark, trying some of the tutorials; this one is from Advanced Analytics with Spark. The following code is supposed to work:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import com.cloudera.datascience.common.XmlInputFormat

val conf = new Configuration()
conf.set(XmlInputFormat.START_TAG_KEY, "<page>")
conf.set(XmlInputFormat.END_TAG_KEY, "</page>")
val kvs = sc.newAPIHadoopFile(path, classOf[XmlInputFormat], // NB: first half of this line assumed; it was truncated
  classOf[LongWritable], classOf[Text], conf)

val rawXmls = kvs.map(p => p._2.toString)

import edu.umd.cloud9.collection.wikipedia.language._
import edu.umd.cloud9.collection.wikipedia._

def wikiXmlToPlainText(xml: String): Option[(String, String)] = {
  val page = new EnglishWikipediaPage()
  WikipediaPage.readPage(page, xml) // NB: assumed; without this the page is never populated
  if (page.isEmpty) None
  else Some((page.getTitle, page.getContent))
}

val plainText = rawXmls.flatMap(wikiXmlToPlainText)


But it gives

scala> val plainText = rawXmls.flatMap(wikiXmlToPlainText)
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1622)
at org.apache.spark.rdd.RDD.flatMap(RDD.scala:295)
...


Running Spark v1.3.0 locally (and I have loaded only about 21 MB of the wiki articles, just to test it).

All of http://stackoverflow.com/search?q=org.apache.spark.SparkException%3A+Task+not+serializable didn't get me any clue...

Thanks.

### QuantOverflow

#### (Re) normalisation of random variable in Monte-Carlo simulations

I have a very simple model (CIR) with a very simple discretisation scheme (Euler) and I use it to do Monte-Carlo Simulations. It is working.

Someone insisted that renormalization of my random variables would give better results, i.e. after drawing my normally distributed random variables I should translate them to obtain exactly mean 0 and scale them to obtain exactly variance 1. I have never heard of this simple technique before.

From a theoretical point of view, I am unsure that my new random variables have the normal distribution I wanted. From a practical point of view, the change in the result is negligible.

Does this technique really work? Do you have an example where it is useful? Or a counterexample where it is very wrong?
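The technique being described is usually called moment matching; a sketch (names illustrative):

```scala
import scala.util.Random

// Shift and scale a sample of normals so it has exactly sample mean 0
// and sample variance 1 (population convention: divide by n).
def renormalize(xs: Array[Double]): Array[Double] = {
  val n = xs.length
  val mean = xs.sum / n
  val sd = math.sqrt(xs.map(x => (x - mean) * (x - mean)).sum / n)
  xs.map(x => (x - mean) / sd)
}

val rnd = new Random(7)
val zs = renormalize(Array.fill(10000)(rnd.nextGaussian()))
println(zs.sum / zs.length) // essentially 0 (up to floating point)
```

After the adjustment the first two sample moments are matched exactly, but the draws are no longer independent (they satisfy two exact linear/quadratic constraints), which is precisely the theoretical wrinkle raised above; the bias this introduces vanishes as the sample grows.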

### Lobsters

#### What are you working on this week?

This is the weekly thread to discuss what you’ve done recently and are working on this week.

### StackOverflow

#### convert function from scala to python [on hold]

I have the following function written in Scala. I am a newbie with Python and Spark.

import org.apache.spark.mllib.linalg._

def toRDD(m: Matrix): RDD[Vector] = {
  val columns = m.toArray.grouped(m.numRows)
  val rows = columns.toSeq.transpose // Skip this if you want a column-major RDD.
  val vectors = rows.map(row => new DenseVector(row.toArray))
  sc.parallelize(vectors)
}


How to convert it into python ?

#### How to roll up into Any and then unroll into the generic type in scala

As you roll up something like instances of Flag[T], you end up rolling into Flag[_] or Flag[Any] like so

val seq: Seq[Flag[_]] = new Flag[Int](5) :+ new Flag[MyFlag](someinstance) :+ new Flag[String]("stringValue")


so inside Flag, I added a

def genericType(implicit m: Manifest[T]): Class[_] = m.runtimeClass


We also have inside the Flag class a

def value: T = { /* the value */ }


so now as I loop over Flag[_], I need to pass value and classOf[T] into a template function

def someTemplateFunction[F](clazz: Class[F], value: F)


How can I achieve this? and should I be using [_], [Any] or [Nothing]?

I finally got something that compiles but had to use a Java function like so to get there (about to test it all out, as I am not sure it will work at all):

public class BindGuiceByType {
  public static void bind(Binder binder, String flagName, Object value, Class clazz) {
    Key key = Key.get(clazz, new FlagImpl(flagName));
    binder.bind(key).toInstance(value);
  }
}


### QuantOverflow

#### Forward Curves and Par Yield Curves

I'm recently reading a research paper on the yield curve by Salomon Brothers, and it states that when the forward curve is above the par yield curve, it is seen as cheaper. If, for example, years 9-12 of the forward rate curve lie above the par yield curve, with the forward 12-year rate above the 9-year rate as well, it is recommended to buy the 12-year bond while selling the 9-year bond.

Unfortunately, I am unable to accurately grasp the concept behind this in relation to the par yield curve. Please help! Thank you!

### StackOverflow

#### Scala case class private constructor but public apply method

I have the following case class with a private constructor, and I cannot access the apply method in the companion object.

case class Meter private (m: Int)

val m = Meter(10) // constructor Meter in class Meter cannot be accessed...


Is there a way to use a case class with a private constructor but keep the generated apply-method in the companion public?

I am aware that there is no difference (in my example) between the two options:

val m1 = new Meter(10)
val m2 = Meter(10)


but I want to forbid the first option.

-- edit --

Surprisingly the following works (but is not really what I want):

val x = Meter
val m3 = x(10) // m3  : Meter = Meter(10)
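One common workaround, sketched with the caveat that whether the generated apply is itself private has changed across Scala versions (it follows the constructor's access from 2.12 on): add an explicit public factory to the companion, which is allowed to call the private constructor (the name create is arbitrary).

```scala
final case class Meter private (m: Int)

object Meter {
  // The companion may call the private constructor.
  def create(m: Int): Meter = new Meter(m)
}

val m3 = Meter.create(10)
println(m3) // Meter(10)
```

This keeps new Meter(10) forbidden outside the companion while still offering one public, controlled way to construct values.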


#### Multiple scala libraries causing error in intellij?

I am using IntelliJ 14 with Scala 2.11.6, installed using Homebrew and symlinked using

ln -s /usr/local/Cellar/scala/2.11.6/libexec/src /usr/local/Cellar/scala/2.11.6/src
ln -s /usr/local/Cellar/scala/2.11.6/libexec/lib  /usr/local/Cellar/scala/2.11.6/lib
mkdir -p /usr/local/Cellar/scala/2.11.6/doc/scala-devel-docs
ln -s /usr/local/Cellar/scala/2.11.6/share/doc/scala /usr/local/Cellar/scala/2.11.6/doc/scala-devel-docs/api


I tried running a simple hello world but run into the following issue.

Error:scalac: Multiple 'scala-library*.jar' files (scala-library.jar, scala-library.jar, scala-library.jar) in Scala compiler classpath in Scala SDK scala-sdk-2.11.6


Edit:

So I checked the compiler classpath under Global Libraries, and apparently there are multiple scala-library.jar files:

file:///usr/local/Cellar/scala/2.11.6/idea/lib/scala-library.jar
file:///usr/local/Cellar/scala/2.11.6/lib/scala-library.jar
file:///usr/local/Cellar/scala/2.11.6/libexec/lib/scala-library.jar


Does anyone know why?

#### Extracting the complete call graph of a scala project (tough one)

I would like to extract from a given Scala project, the call graph of all methods which are part of the project's own source.

As I understand, the presentation compiler doesn't enable that, and it requires going down all the way down to the actual compiler (or a compiler plugin?).

Can you suggest complete code that would safely work for most Scala projects, except those that use the wackiest dynamic language features? By call graph, I mean a directed (possibly cyclic) graph comprising class/trait + method vertices, where an edge A -> B indicates that A may call B.

Calls to/from libraries should be avoided or "marked" as being outside the project's own source.

Thanks!

### TheoryOverflow

#### Do these set systems imply a partition?

During research, I hit a set-theoretic claim that I could neither prove nor disprove.

Let $S_1,S_2,S_3$ be three set systems over the same universe $U$ such that

1. $S_1,S_2,S_3$ are closed w.r.t. $\cap$ and $\cup$ and

2. for each pair $u,v \in U$, there are two disjoint sets $X \in S_i, Y \in S_j$, $i,j \in \{1,2,3\}$, $i \neq j$, such that $u \in X$ and $v \in Y$.

Then, there is a partition of $U$ into at most three sets, each from a different set system.

### CompsciOverflow

#### What advantages does Block Programming Environment have over High Level Programming Language

I need help on my GCSE computing homework: I need to know what advantages block programming environments like App Inventor have over high-level programming languages like Python or Java.

### AWS

#### DynamoDB Update – Improved JSON Editing & Key Condition Expressions

Thousands of customers use Amazon DynamoDB to build popular applications for Gaming (Battle Camp), Mobile (The Simpsons Tapped Out), Ad-tech (AdRoll), Internet-of-Things (Earth Networks) and Modern Web applications (SmugMug).

We have made some improvements to DynamoDB in order to make it more powerful and easier to use. Here’s what’s new:

1. You can now add, edit, and retrieve native JSON documents in the AWS Management Console.
2. You can now use a friendly key condition expression to filter the data returned from a query by specifying a logical condition that must be met by the hash or hash range keys for the table.

Let’s take a closer look!

Native JSON Editing
As you may know, DynamoDB already has support for storage, display, and editing of JSON documents (see my previous post, Amazon DynamoDB Update – JSON, Expanded Free Tier, Flexible Scaling, Larger Items if this is news to you). You can store entire JSON-formatted documents (each up to 400 KB) as single DynamoDB items. This support is implemented within the AWS SDKs and lets you use DynamoDB as a full-fledged document store (a very common use case).

You already have the ability to add, edit, and display JSON documents in the console in DynamoDB’s internal format. Here’s what this looks like:

Today we are adding support for adding, editing, and displaying documents in native JSON format. Here's what the data from the example above looks like in this format:

You can work with the data in DynamoDB format by clicking on DynamoDB JSON. You can enter (or paste) JSON directly when you are creating a new item:

You can also view and edit the same information in structured form:

Key Condition Expressions
You already have the ability to specify a key condition when you call DynamoDB’s Query function. If you do not specify a condition, all of the items that match the given hash key will be returned. If you specify a condition, only items that meet the criteria that it specifies will be returned. For example, you could choose to retrieve all customers in Zip Code 98074 (the hash key) that have a last name that begins with “Ba.”

With today’s release, we are adding support for a new and easier to use expression-style syntax for the key conditions. You can now use the following expression to specify the query that I described in the preceding paragraph:

      zip_code = "98074" and begins_with(last_name, "Ba")


The expression can include comparison operators (=, <, <=, >, >=), range tests (BETWEEN/AND), and prefix tests (begins_with). You can specify a key condition (the KeyCondition parameter) or a key condition expression (the KeyConditionExpression parameter) on a given call to the Query function, but you cannot specify both. We recommend the use of expressions for new applications. To learn more, read about Key Condition Expressions in the DynamoDB API Reference.

Available Now
These features are available now and you can start using them today!

Jeff;

### CompsciOverflow

#### Turing Machine that semidecides a language containing all strings of length 4

I was going through some Halting Problem reduction and I found the following question:

Given a TM $M$, does the language that it semi-decides contain all strings of exactly length $4$?

The question later asks to prove whether this is recursive or r.e. Going through some practice, I came to the conclusion that this must be r.e., since otherwise such a problem would solve the Halting Problem.

So, I attempted this question by showing that $$HP < 4LENGTH$$

My approach is to assume that $4LENGTH$ is recursive and then derive a contradiction: since $HP$ is not recursive, neither is $4LENGTH$.

However, I can't understand how to construct such a machine. Moreover, by looking at some similar proofs I see that they suggest building a machine $M'$ that erases the tape, writes $I$ (the input) on the tape and simulates $M$. However, I can't follow such a proof.

Can someone describe how to solve such a problem or give me any direction?

### StackOverflow

#### How can I get the behavior of GNU's readlink -f on a Mac?

On Linux, the readlink utility accepts an option -f that follows additional links. This doesn't seem to work on Mac and possibly BSD based systems. What would the equivalent be?

Here's some debug information:

$which readlink; readlink -f /usr/bin/readlink readlink: illegal option -f usage: readlink [-n] [file ...]  ### QuantOverflow #### The Law of One Price in a discrete model The following question assumes familiarity with the discrete model described in chapter 5 of Steven Roman's "Introduction to the Mathematics of Finance", 2nd edition, Springer 2012. I will not describe the model or the associated notation in this post. 1. The Law of One Price (p. 132) states that, in the absence of arbitrage opportunity in the market, $$\mathcal{V}_T(\Phi_1) = \mathcal{V}_T(\Phi_2) \implies \mathcal{V}_k(\Phi_1) = \mathcal{V}_k(\Phi_2)$$ for all times$0 \leq k \leq T$and for all self-financing trading strategies$\Phi_1$and$\Phi_2$. Unfortunately, no proof is provided in the text (in fact, this law is stated as a definition rather than a theorem). Why does this law hold? 2. Additionally, it is implied by the text following the statement of the Law of One Price, that, if the market has no arbitrage opportunity, then, given an attainable alternative$X$, if a new asset$a^*$is introduced into the market and is priced in such a way that its payoff at time$t_T$is$X$and its pricing is consistent with the Law of One Price, i.e. for every$k \in \{0, 1, \dots, T\}$,$S_{a^*, k} := \mathcal{V}_k(\Phi)$, where$\Phi$is any self-financing trading strategy such that$\mathcal{V}_T(\Phi) = X$, then the resulting, extended market will still have no arbitrage opportunity. Why is this so? ### StackOverflow #### freebsd compile is so complicated? I want to add custom syscall to freebsd(school work). I google hundreds of time. there is no right solution for it. my homework is: "Add custom syscall to freebsd kernel and recompile the kernel and use it". 
Finally I found that I should follow the instructions on these two pages: then it shows errors at compile time:

    <sys/parma.h> no such file or directory
    <sys/kern.h> no such file or directory
    <sys/syscallargs.h> no such file or directory

I removed these three header includes from my file and recompiled it. Now it shows other errors, like: MAXCPU undeclared in the pcpu.h file. What did I miss? How can I do my school work? NOTE: I use FreeBSD 8 in VirtualBox.

#### Executing scala code from a jar, clarification needed

I have a maven project, which produces 1 jar file at the end. The project contains 2 modules, of which one is Java and the other one is Scala. The Scala module uses Java code for some backend things. In my example, Java is the actual logic, Scala is the business rules. This is how I am executing my code. Neither way I am trying works.

java.lang.ClassNotFoundException:

    scala -cp /usr/share/scala/lib/scalatest_2.11-2.2.4.jar org.scalatest.tools.Runner -p '/home/program-201504.22-SNAPSHOT.jar:/home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar:/home/me/temp/lib/.' -o -fWDF /home/me/git/program/scala/target/surefire-reports/TestSuite.txt -u /home/me/git/program/scala/target/surefire-reports/. -s a.b.engine.driver.MyClass

java.lang.ClassNotFoundException:

    scala -cp /usr/share/scala/lib/scalatest_2.11-2.2.4.jar:/home/program-201504.22-SNAPSHOT.jar:/home/me/temp/lib/.:/home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar org.scalatest.tools.Runner -p '/usr/share/scala/lib/scalatest_2.11-2.2.4.jar:/home/program-201504.22-SNAPSHOT.jar:/home/me/temp/lib/.:/home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar' -o -fWDF /home/me/git/program/scala/target/surefire-reports/TestSuite.txt -u /home/me/git/program/scala/target/surefire-reports/.
    -s a.b.engine.driver.MyClass

java.lang.NoClassDefFoundError:

    scala -cp /usr/share/scala/lib/scalatest_2.11-2.2.4.jar org.scalatest.tools.Runner -R "/home/program-201504.22-SNAPSHOT.jar /home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar /home/me/temp/lib/." -o -fWDF /home/me/git/program/scala/target/surefire-reports/TestSuite.txt -u /home/me/git/program/scala/target/surefire-reports/. -s a.b.engine.driver.MyClass

Please help me understand what I am doing wrong.

1. I confirmed that a.b.engine.driver.MyClass exists in /home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar

### CompsciOverflow

#### How to combine items in a list into one long list? [on hold]

python 3.3. Hi.

    list = ['apple', 'banana', 'carrot', 'dinosaur', 'elephant']

This is my list, and I want it to become one long list of single characters, like this:

    list = ["a", "p", "p", "l", "e", "b", "a", "n"-----] etc.

How do I achieve this?

### StackOverflow

#### Intellij: "Error running Scala Console: Cannot Start Process"

New install. Scala SBT project. The full message is:

    Error running Scala Console: Cannot start process, the working directory C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 13.1.4\jre\jre\bin does not exist

It would be nice to have the directory default to somewhere sensible in the current project...I think.

#### Passing request context implicitly in an actor system

I would like to propagate a request context implicitly in a system of collaborating actors. To simplify and present the situation: my system has multiple actors, and the messages passed to these actors need to include this RequestContext object.
ActorA receives messages of type MessageA. ActorB receives messages of type MessageB. When ActorA needs to send a message to ActorB as part of the handling of MessageA, it performs business logic, constructs a MessageB from the results of that logic as well as the RequestContext available in MessageA, and then sends it to ActorB:

    def handle(ma: MessageA) {
      val intermediateResult = businessLogic(ma)
      actorB ! MessageB(intermediateResult, ma.requestContext)
    }

We have a slew of messages to be handled, and explicitly passing around the requestContext is cumbersome. I am thinking of creative ways to use Scala's implicits feature to avoid explicitly injecting the RequestContext embedded within the incoming message into the outgoing message. The messages are case classes (and they need to be). I have read about implicits rules, but bringing an attribute of an object into the current implicit scope appears far-fetched. This, I am sure, should be a common requirement. Any suggestions? Thanks.

#### SBT with Maven Central - get rid of Scala version in artifact url

I am trying to use in my Scala project a Java library which is on Maven Central. While resolving this dependency, SBT appends the Scala version to the repository url, which obviously does not exist in such a format. Can I somehow disable appending the Scala version for this specific artifact?

#### How to count characters of a String?

I am a pretty newbie to Scala! I would like to count the times a char occurs in a string. How do I do this? I started writing something like this, but I find the syntax very hard to grasp. Any help?

    var s = "hello"
    var list = s.toList.distinct
    list.foreach(println(s.count(_=='list')))

#### Creating Spark application using wrong Scala version

I am following the instructions here: https://spark.apache.org/docs/latest/quick-start.html to create a simple application that will run on a local standalone Spark build. In my system I have Scala 2.9.2 and sbt 0.13.7.
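For the character-counting question above, a compact sketch using only the standard library (note that the question's `s.count(_=='list')` does not compile, since `'list'` is not a single-character literal):

```scala
// Count how often each character occurs in a string.
// groupBy(identity) buckets equal characters together; the size of each
// bucket is that character's count.
val s = "hello"
val counts: Map[Char, Int] =
  s.groupBy(identity).map { case (c, occurrences) => c -> occurrences.length }
// for "hello" this contains 'l' -> 2 and 'h', 'e', 'o' -> 1
```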
When I write in my simple.sbt the following: scalaVersion := "2.9.2" and then use sbt package, I get the error:

    sbt.ResolveException: unresolved dependency: org.apache.spark#spark-core_2.9.2;1.3.1: not found

However, when I write in simple.sbt: scalaVersion := "2.10.4" sbt runs successfully and the application runs fine on Spark. How can this happen, since I do not have Scala 2.10.4 on my system?

### Lobsters

#### Reducing power consumption on Haswell and Broadwell systems

### StackOverflow

#### Scala: Parsing json using asOpt[T] return None although I have value

I'm trying to extract a value using asOpt[T], as I may not have the key. I get None in return, although I have a value in my string.

    val jsonString = {"data": {"operation":"+", "value":"1"},"right": {"data":{"operation":"-", "value":"2"}},"left":{"data":{"operation":"-", "value":"4"}}}
    buildFromJson(jsonString)

Code:

    import play.api.libs.json.Json
    ...
    def buildFromJson(jsonString: String): TreeNode = {
      val nodeData = json.\("data")
      val nodeValue = nodeData.\("operation").as[String] //This works
      println("left= " + json.\("left").asOpt[String]) //output= None
      println("left= " + json.\("left").as[String]) // throws exception
      println("left= " + json.\("left").toString) //output= {"data":{"operation":"-", "value":"4"}}

What is the best way to parse a JSON key that might not exist? As I read, it is asOpt, but it doesn't work.

Update - How I overcame the problem:

1) First of all m-z was right, it was indeed an object and could not be returned as a String.

2) I couldn't find a good way to extract a field that might not exist (as at the beginning I added 'right' to the json only if the right sub node existed). Now I add a 'right' field with a null value to the json if it doesn't exist.

### Fefe

#### A lawyer once emailed me the actual legal basis ...

A lawyer once emailed me the actual legal basis for foreign deployments of the Bundeswehr. I quote:

The Federal Constitutional Court has located the legal basis for out-of-area deployments of the Bundeswehr in Art. 24 GG. The court's leading decision, which first mentioned ("invented") this, is in volume 90 (available for example at: http://www.servat.unibe.ch/dfr/bv090286.html). The court's constitutional idea was: if the Federal Republic of Germany may - so says Art. 24 GG - participate in international organizations that themselves carry out military operations (e.g. the UN, later also NATO), then the Grundgesetz thereby also implicitly authorizes out-of-area deployments of the Bundeswehr. Whether it is compelling to assume this implication can be doubted. In any case, that is what our Constitutional Court says. So the Bundeswehr should also cite Art. 24 GG. Or you could at least make the "correct legal basis" known in your blog?

Consider it done!

### CompsciOverflow

#### Countability union of all finite and countably infinite sequences over finite alphabet

Is the set of all finite and countably infinite sequences over \{0,1\} countable? From my analysis, I think it is countable. I think of this as the set of all strings from a finite alphabet \Sigma=\{0,1\}, hence \Sigma^* (is this a good assumption?). I later show that I can count each string in the following order: 0, 1 (length 1); 00, 01, 10, 11 (length 2); and so on. However, I am very confused by the initial requirement "finite and countably infinite sequences". Does my method account for the countably infinite strings?

### StackOverflow

#### Not all akka stream Sinks receive the emitted data

When running the following akka streaming FlowGraph not all the emitted Chars are received by all Sinks.
    package sample.stream

    import java.io.{ FileOutputStream, PrintWriter }
    import akka.actor.ActorSystem
    import akka.stream.ActorFlowMaterializer
    import akka.stream.scaladsl.{ Broadcast, FlowGraph, Sink, Source }
    import scala.concurrent.forkjoin.ThreadLocalRandom
    import scala.util.{ Failure, Success, Try }

    object Sample {
      def main(args: Array[String]): Unit = {
        println("start")
        implicit val system = ActorSystem("Sys")
        import system.dispatcher
        implicit val materializer = ActorFlowMaterializer()

        var counter = -1
        val countSource: Source[Char, Unit] = Source(() => Iterator.continually { counter += 1; (counter + 'A').toChar }.take(11))

        var counter1 = 0
        val consoleSink1 = Sink.foreach[Char] { counter =>
          println("sink1:" + counter1 + ":" + counter)
          counter1 += 1
          Thread.sleep(100)
          //Thread.sleep(300)
        }
        var counter2 = 0
        val consoleSink2 = Sink.foreach[Char] { counter =>
          println("sink2:" + counter2 + ":" + counter)
          counter2 += 1
          Thread.sleep(200)
        }

        val materialized = FlowGraph.closed(consoleSink1, consoleSink2)((x1, x2) => x1) { implicit builder =>
          (console1, console2) =>
            import FlowGraph.Implicits._
            val broadcast = builder.add(Broadcast[Char](2))
            countSource ~> broadcast ~> console1
            broadcast ~> console2
        }.run()

        // ensure the output file is closed and the system shutdown upon completion
        materialized.onComplete {
          case Success(_) => system.shutdown()
          case Failure(e) =>
            println(s"Failure: ${e.getMessage}")
            system.shutdown()
        }
        println("waiting the remaining ones")
        //scala.concurrent.Await.ready(materialized, scala.concurrent.duration.DurationInt(100).seconds)
        //system.shutdown()
        println("end")
      }
    }

After running, the following output is generated:

    [info] Running sample.stream.Sample
    [info] start
    [info] waiting the remaining ones
    [info] end
    [info] sink2:0:A
    [info] sink1:0:A
    [info] sink1:1:B
    [info] sink1:2:C
    [info] sink2:1:B
    [info] sink1:3:D
    [info] sink2:2:C
    [info] sink1:4:E
    [info] sink1:5:F
    [info] sink2:3:D
    [info] sink1:6:G
    [info] sink1:7:H
    [info] sink2:4:E
    [info] sink2:5:F
    [info] sink1:8:I
    [info] sink1:9:J
    [info] sink2:6:G
    [info] sink2:7:H
    [info] sink1:10:K

The second sink doesn't receive the 8th, 9th and 10th values (I, J, K), but still the entire flow is ended. What should I do to wait for both Sinks to consume all the data? I discovered that if I change (x1,x2)=>x1 to (x1,x2)=>x2 this will wait. The same happens with sleeping 300ms in the first sink.

#### Connect to a server in Scala [duplicate]

This question already has an answer here: I have an assignment in a discipline from my graduation course where I have to communicate with a server. I am allowed to do that in any language, so I chose Scala. I only receive the port that I am supposed to use, which is 127.0.0.1:50200. I would like to know how I would make that connection in Scala. Is it a library or something built in already in the language? I know it is probably really simple, but I have never done something like this. Ps.: Note that the server is an application that is running on my computer.

### Lobsters

#### Refinement types in Haskell as a library

### StackOverflow

#### Java error: "Comparison method violates its general contract!"

I have this code:

    package org.optimization.geneticAlgorithm;

    import org.optimization.geneticAlgorithm.selection.Pair;

    public abstract class Chromosome implements Comparable<Chromosome> {
      public abstract double fitness();
      public abstract Pair<Chromosome> crossover(Chromosome parent);
      public abstract void mutation();

      public int compareTo(Chromosome o) {
        int rv = 0;
        if (this.fitness() > o.fitness()) {
          rv = -1;
        } else if (this.fitness() < o.fitness()) {
          rv = 1;
        }
        return rv;
      }
    }

And every time I run this code I get this error:

    Exception in thread "main" java.lang.IllegalArgumentException: Comparison method violates its general contract!
    at java.util.ComparableTimSort.mergeHi(ComparableTimSort.java:835)
    at java.util.ComparableTimSort.mergeAt(ComparableTimSort.java:453)
    at java.util.ComparableTimSort.mergeCollapse(ComparableTimSort.java:376)
    at java.util.ComparableTimSort.sort(ComparableTimSort.java:182)
    at java.util.ComparableTimSort.sort(ComparableTimSort.java:146)
    at java.util.Arrays.sort(Arrays.java:472)
    at java.util.Collections.sort(Collections.java:155)
    at org.optimization.geneticAlgorithm.GeneticAlgorithm.nextGeneration(GeneticAlgorithm.java:74)
    at org.optimization.geneticAlgorithm.GeneticAlgorithm.execute(GeneticAlgorithm.java:40)
    at test.newData.InferenceModel.main(InferenceModel.java:134)

I use OpenJDK7u3 and I return 0 when the objects are equal. Can someone explain this error to me?

### QuantOverflow

#### What is the gross accounting relation of Cobb-Douglas function?

We have a Cobb-Douglas function like this: $Y = A K^\alpha L^{1-\alpha}$. One of the books deduces the following. How can we get this formula? $$\frac{\Delta Y}{Y} = \frac{\Delta A}{A} + \alpha\frac{\Delta K}{K} + (1-\alpha)\frac{\Delta L}{L}$$

### CompsciOverflow

#### Is it decidable whether a pushdown automata recognizes a given regular language?

The problem whether two pushdown automata recognize the same language is undecidable. The problem whether a pushdown automaton recognizes the empty language is decidable; hence it is also decidable whether it recognizes a given finite language. It is undecidable whether the language accepted by a pushdown automaton is regular. But ...

... is it decidable whether a pushdown automaton recognizes a given regular language? In case the answer is no, does the problem become decidable if the given regular language has star height 1?

### /r/emacs

#### Org-mode with CUA

I'm using CUA mode, so when I select text I lose any of the C-c bindings, like converting items to headings and vice versa. What's the best way for me to change the key bindings in org-mode to accommodate CUA mode?
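For the Cobb-Douglas growth-accounting question above, the relation follows from taking logs and totally differentiating; a standard derivation, sketched here:

```latex
% Start from the production function Y = A K^\alpha L^{1-\alpha}
% and take logarithms:
\ln Y = \ln A + \alpha \ln K + (1-\alpha)\ln L
% Totally differentiate, using d(\ln X) = dX/X:
\frac{dY}{Y} = \frac{dA}{A} + \alpha\,\frac{dK}{K} + (1-\alpha)\,\frac{dL}{L}
% For small discrete changes, dX/X \approx \Delta X / X, which gives the
% gross accounting relation:
\frac{\Delta Y}{Y} \approx \frac{\Delta A}{A} + \alpha\,\frac{\Delta K}{K} + (1-\alpha)\,\frac{\Delta L}{L}
```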
### StackOverflow

#### Declare a variable in Scala whose type is same as another variable

Let's say I have a variable:

    val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))

Now I want to declare a few more variables of the same type as allF, i.e. (Some[Int], String, Double, String, Int, Some[Int]). I can do:

    var a1: (Some[Int], String, Double, String, Int, Some[Int]) = _
    var a2: (Some[Int], String, Double, String, Int, Some[Int]) = _
    ... and so on

or I can do:

    type T = (Some[Int], String, Double, String, Int, Some[Int])
    var a1: T = _
    var a2: T = _
    ... and so on

Is there some way I can use the variable allF to get its type and declare variables a1, a2, a3, ... like this:

    val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))
    var a1: typeof(allF) = _
    var a2: typeof(allF) = _
    ...

UPDATE - Also for situations like this:

    val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))
    xyz match {
      case y: (???) \\ if y is of the same type as allF
    }

### Fefe

#### You know what the Bundeswehr needs now? Uranium ammunition! ...

You know what the Bundeswehr needs now? Uranium ammunition! So that we can (no, really!!) afford a tank war against Russia! You don't believe it? Read for yourselves:

The Bundeswehr is currently not in a position to effectively fight modern Russian battle tanks. True, with the Leopard 2 the German armed forces have one of the best battle tanks in the world. However, according to information from "Welt am Sonntag", there is a lack of sufficiently penetrating ammunition for this weapon system. The tungsten-based dart ammunition of the Bundeswehr does not produce enough kinetic energy to penetrate the technologically sophisticated armor of the newest Russian combat vehicles of the types T90 and modernized T80.

Hey, maybe we can simply agree that we just don't want to wage a campaign against Russia anymore? WTF?!? I mean, how many times do continental European powers actually have to invade Russia before we finally notice that it's not a good idea?!

Update: The densities of tungsten and uranium are, by the way, very close together in this universe; tungsten is even slightly ahead. Tungsten therefore even has more kinetic energy. But it also costs much more than uranium, which is produced practically as waste.

Update: Another comment: The fact is that the gun of the Leopard 2 gives the tungsten dart projectiles considerably more kinetic energy than other fielded Western tank guns, among other reasons because the barrel is unusually long. If uranium ammunition were used, this kinetic energy would be nearly identical. DU projectiles are kinetic-energy penetrators that pierce the armor of a hard target through high momentum. Uranium is suited for this use above all because of its very high density, but also because of its property of deforming on impact in such a way that a point is preserved; for this reason uranium ammunition is also called "self-sharpening". An additional effect is that, on impact with an armored target, hot uranium dust forms, which spontaneously ignites on contact with air inside the target (pyrophoric effect). This can ignite the ammunition carried on board, or the fuel, which can lead to the so-called secondary explosion of the target.

Update: Another reader notes that for attacking tanks one would rather use tandem shaped charges, and the Bundeswehr has those ready for the Leopard 2.

#### By the way, on the genocide of the Armenians: According to this ...

By the way, on the genocide of the Armenians: according to this book, the German Empire shares guilt for the genocide of the Armenians, via its then ally, the Ottoman Empire. Just as Turkey is the legal successor of the Ottoman Empire, Germany is the legal successor of the German Empire. Genocide is not subject to a statute of limitations.

Oh, and while we are on the subject of genocide: "Federal government: Germany committed no genocide against the Herero and Nama." No, of course not! Not because it didn't take place - no, even the federal government doesn't lie that brazenly - but because one supposedly may not judge it by today's standards!1!!

The brutal suppression of the uprising of the Herero and Nama ethnic groups by German colonial troops between 1904 and 1908 in what was then German South-West Africa, today's Namibia, cannot, in the view of the federal government, be judged by the rules of international humanitarian law valid today, and therefore also cannot be classified as genocide.

That is the answer to a minor interpellation (Kleine Anfrage) by the Left Party. The law that regulates this only dates from 1955 and is based on a convention from 1948, and does not apply retroactively. Well then.

Update: One more: Martin Sonneborn on this.

Update: A lawyer's comment on this: "Legal succession is the term for the transfer of existing rights and obligations of one person to another ('legal successor')." But the Federal Republic is not the legal successor of the German Reich; it is (partially) identical with it. Paragraph 76: "The Federal Republic of Germany is thus not the 'legal successor' of the German Reich, but as a state identical with the state 'German Reich' - with regard to its spatial extent, however, only 'partially identical', so that in this respect the identity claims no exclusivity. The Federal Republic thus does not encompass, as far as its state people and its state territory are concerned, the whole of Germany, notwithstanding that it recognizes a unified state people of the subject of international law 'Germany' (German Reich), to which its own population belongs as an inseparable part, and a unified state territory 'Germany' (German Reich), to which its own state territory belongs as a likewise inseparable part."

#### Some of you will already have noticed: Here ...

Some of you will already have noticed: this blog is about media literacy. Unfortunately there are still readers who don't notice when they are walking past a training opportunity. For example recently, with the question about the Bundeswehr and foreign deployments. There, for the legal situation, I did not link to the actual legal situation but to the Bundeswehr's own interpretation. Granted, they could hardly misquote the paragraphs, but they could summarize them wrongly. How, for example, one could conclude that the Grundgesetz permits war deployments in Afghanistan remains a mystery. So the Bundeswehr cites Article 35 of the Grundgesetz, which speaks of "Länder". What is meant, of course, are the federal states, not other countries. Hopefully nobody falls for such sleight of hand, but I received only one comment from someone who noticed it. And that was someone from the Grundrechtepartei. He doesn't count.

The other lesson that, it seems, almost everyone walked past was the feminism piece here. People, make a bit of an effort! This can't go on. Various people apparently read it as "Fefe hates feminists and doesn't like Muslims either". WTF? What was actually meant, of course, is that Islam as a religion is dragged through the mud by a few rabble-rousers who trample the values of Islam underfoot and blather about jihad. It is similar with Christianity (even if the charity there too rarely extends to refugees, so hypocrisy must be diagnosed). And with feminism. Buuuuut. The Muslims distance themselves again and again from the jihadists. The Christians distance themselves again and again from their fundamentalist crackpots. Only the feminists do not distance themselves from their fundamentalist crackpots, but much prefer to settle into a comfortable victim role as the misunderstood and insulted minority, just as Christians in the USA again and again quite irrationally cast themselves in the role of the minority in order to be able to feel persecuted for once.

Of course I deliberately phrased it nicely ambiguously, so that it would be easy to let oneself be lulled and scroll on. But whoever wants to be lulled and scroll on, I hereby tell quite plainly to their face: you are in the wrong place here. So, dear readers, make a bit of an effort. I want to see knowledge gained here, and resistance against ready-made pacifier formulations! On the other hand, I will admit that it pleases me when my crude manipulation attempts work. It means that I am not doing this for nothing, that there really is a need. And that I am not doing it too badly, otherwise nobody would fall for it.

Update: Another note from the guy from the Grundrechtepartei: they do not say "Gesetzesgrundlage", which is the correct terminology, but "rechtliche Grundlage" (legal basis). That is presumably something like "premium" or "yogurt with fruit preparation" in the supermarket.

Update: Incidentally, the link was wrong too :) It went to their justification for Bundeswehr deployment domestically, not abroad. The justification for foreign deployments of the Bundeswehr is actually based on Art. 24 GG.

#### Oh really! The Chancellery apparently knew since ...

Oh really! The Chancellery apparently knew since 2008 that the BND was conducting economic espionage in Germany for the NSA! Responsible at the time were Mr. de Maiziere (today interior minister), then Ronald Pofalla (today a lobbyist at Deutsche Bahn).
### DataTau

#### ML/NLP competition: predict hygiene violations in Boston using Yelp review data

### StackOverflow

#### How do I specify type parameters via a configuration file?

I am building a market simulator using Scala/Akka/Play. I have an Akka actor with two children. The children need to have specific types, which I would like to specify as parameters. Suppose that I have the following class definition...

    case class SecuritiesMarket[A <: AuctionMechanismLike, C <: ClearingMechanismLike](instrument: Security) extends Actor with ActorLogging {
      val auctionMechanism: ActorRef = context.actorOf(Props[A], "auction-mechanism")
      val clearingMechanism: ActorRef = context.actorOf(Props[C], "clearing-mechanism")

      def receive: Receive = {
        case order: OrderLike => auctionMechanism forward order
        case fill: FillLike => clearingMechanism forward fill
      }
    }

Instances of this class can be created as follows...

    val stockMarket = SecuritiesMarket[DoubleAuctionMechanism, CCPClearingMechanism](Security("GOOG"))
    val derivativesMarket = SecuritiesMarket[BatchAuctionMechanism, BilateralClearingMechanism](Security("SomeDerivative"))

There are many possible combinations of auction mechanism types and clearing mechanism types that I might use when creating a SecuritiesMarket instance for a particular model/simulation. Can I specify the type parameters that I wish to use in a given simulation in the application.conf file?

#### Install scalatest in Scala IDE for Eclipse

I have installed Eclipse Luna. Then I installed Scala IDE via Help -> Install new software, adding the software site with the link from here. Then I installed sbt 0.13 and sbteclipse using this mini-guide and created an eclipse project. Then I (kind of) installed scalatest by adding it to my build.sbt. Now it looks like this:

    val scalaTest = "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"

    lazy val commonSettings = Seq(
      scalaVersion := "2.11.6"
    )

    lazy val root = (project in file(".")).
      settings(commonSettings: _*).
      settings(
        libraryDependencies += scalaTest
      )

Then I created a test from this example. The file called FirstSpec.scala is located in testProject/src/test/scala-2.11/testProject/. So here is the problem: eclipse seems to not see scalatest. The second line with import org.scalatest._ is underlined red with the error description object scalatest is not a member of package org. And, following this guide, I don't see the option Run As -> ScalaTest - Suite when choosing my test class. At the same time everything goes fine when I start an sbt session in my test project and type the test command. The tests launch and pass. So my questions are:

• why does eclipse not see scalatest if I put it in build.sbt's libraryDependencies? What's the point of libraryDependencies then?

• why does sbt test run the tests without a problem? If sbt sees scalatest, why can't eclipse?

#### How to prove size of a list in Leon?

I am trying to prove that the size (number of elements) of a list is non-negative, but Leon fails to prove it---it just times out. Is Leon really not capable of proving this property, or am I using it wrongly? My starting point is a function I read in the paper "An Overview of the Leon Verification System".

    import leon.lang._
    import leon.annotation._

    object ListSize {
      sealed abstract class List
      case class Cons(head: Int, tail: List) extends List
      case object Nil extends List

      def size(l: List): Int = (l match {
        case Nil => 0
        case Cons(_, t) => 1 + size(t)
      }) ensuring(_ >= 0)
    }

### QuantOverflow

#### What is the most efficient way to periodically download all new 10-K filings from SEC's EDGAR?

I found this website which uses a perl script to download all the filings. It states: "There are 200K+ 10-K (and equivalent) filings, which will take considerable harddisk space and time to download. The SEC prefers that bulk-download is done during 'quiet time', i.e., outside the regular trading hours."
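A plausible explanation for the Leon timeout above (my reading, not from the original post): with a 32-bit Int result, 1 + size(t) can overflow to a negative number, so the postcondition is not inductive. Declaring the size as BigInt, which cannot overflow, is the usual fix. Sketched here as plain Scala, with the Leon-specific imports omitted:

```scala
object ListSize {
  sealed abstract class List
  case class Cons(head: Int, tail: List) extends List
  case object Nil extends List

  // BigInt arithmetic cannot overflow, so `1 + size(t) >= 0` really does
  // follow from `size(t) >= 0`, making the postcondition provable by induction.
  def size(l: List): BigInt = (l match {
    case Nil => BigInt(0)
    case Cons(_, t) => 1 + size(t)
  }) ensuring (_ >= 0)
}
```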
Is there a better way than deluging the SEC's servers with requests to get this information?

### /r/compsci

#### A Paper About Reading Papers: How to Read a Paper

#### Audio tutorials/books about algorithms and data structures

Every day I waste 2 hours because of my commute. I can't read because of the vibration. I was wondering if you know any resources that I can listen to, to learn algorithms and data structures?

### Lobsters

#### Reenix: Implementing a Unix-Like Operating System in Rust

### StackOverflow

#### explode json array in schema rdd

I have a json like:

    {"name":"Yin", "address":[{"city":"Columbus","state":"Ohio"},{"city":"Columbus","state":"Ohio"}]}
    {"name":"Michael", "address":[{"city":null, "state":"California"},{"city":null, "state":"California"}]

here address is an array, and if I use sqlContext.jsonFile I get the data in the schema rdd as follows:

    [Yin, [(Columbus, Ohio), (Columbus, Ohio)]
    [Micheal, [(null, California), (null, California)]

I want to explode the array and get the data in the following format in the schema rdd:

    [Yin, Columbus, Ohio]
    [Yin, Columbus, Ohio]
    [Micheal, null, California]
    [Micheal, null, California]

I am using Spark SQL.

### /r/compsci

#### Late IEEE Conference paper notification?

The notification should have come two days ago. What should I make of this matter?

### StackOverflow

#### How do you provide domain credentials to ansible's mount module?

I've figured out how to use the shell module to create a mount on the network using the following task:

    - name: mount image folder share
      shell: "mount -t cifs -o domain=MY_DOMAIN,username=USER,password=PASSWORD //network_path/folder /local_path/folder
      sudo_user: root
      args:
        executable: /bin/bash

But it seems like it's better practice to use Ansible's mount module to do the same thing.
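A sketch of the equivalent mount-module task (untested; the paths and credentials are the placeholders from the question, and I'm assuming an Ansible version of this era, where the mount point is passed as `name` and filesystem options as a single `opts` string):

```yaml
- name: mount image folder share
  mount:
    name: /local_path/folder           # mount point
    src: //network_path/folder         # CIFS share
    fstype: cifs
    opts: "domain=MY_DOMAIN,username=USER,password=PASSWORD"
    state: mounted
  sudo: yes
```

A common refinement is to move the credentials into a root-readable file and pass opts: "credentials=/etc/cifs-creds" instead, so the password does not appear in the playbook.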
Specifically, I'm confused about how to provide the options domain=MY_DOMAIN,username=USER,password=PASSWORD. I see there is a flag for opts, but I'm not quite sure how this would look.

#### How do i subtract an RDD[(Key,Object)] from another one?

I want to change the format of my data from RDD(Label:String, (ID:String, Data:Array[Double])) to an RDD Object with the label, id and data as components. So when I print my rdd two consecutive times, the references of the objects change:

    class Data_Object(private val id: String, private var vector: Vector) extends Serializable {
      var label = ""
      ...
    }

First print

    (1,ms3.Data_Object@35062c11)
    (2,ms3.Data_Object@25789aa9)

Second print

    (2,ms3.Data_Object@6bf5d886)
    (1,ms3.Data_Object@a4eb65)

I think that explains why the subtract method doesn't work. So can I use subtract with an object as value, or do I return to my classic model? Thank you for your precious help.

#### Is using Try[Unit] the proper way?

Recently I started to mess around with Scala. I came across the concept of try/success/failure. Now I am wondering how to use this concept when a method has the return type Unit. Is using Try[Unit] the correct way? Maybe I am too influenced by my Java background, but is it a good idea to force the caller to deal with the problem?

#### How to reference the name of a val in a Object class in scaladoc?

Is there a way that I can reference the name of a val in an Object class using scaladoc, similar to

    {@value #STATIC_FIELD}

in javadoc?

### Planet Theory

#### Advice on running a theory day

Last semester I ran a THEORY DAY AT UMCP. Below I have ADVICE for people running theory days. Some I did, some I didn't do but wish I did, and some are just questions you need to ask yourself.

1) Picking the day- I had two external speakers (Avrim Blum and Sanjeev Arora) so I was able to ask THEM what day was good for THEM. Another way is to pick the DAY first and then ask for speakers.
2) Number of external speakers: We had two, and the rest were internal. The external speakers gave hour-long talks, the internal ones 20-minute talks. This worked well; however, one can have more or even all external speakers.

3) Whoever is paying for it should be told about it early in the process.

4) Lunch: catered or out? I recommend catered if you can afford it, since it is a good time for people to all talk. See the next point.

5) If it's catered you need a head count, so you need people to register. The number you get may be off; you may want to ask people when they register whether they want lunch. Then add ten percent.

6) Ask the guest speakers what arrival time suits them before they make plans, so you can coordinate their dinner the previous night.

7) If you have the money and energy, make name tags ahead of time. If not, just have some sticky tags and a magic marker.

8) Guest speakers: for getting them FROM Amtrak or the airport to dinner/hotel, give them a personal lift (they may be new to the city and a bit lost). For getting them from the event back TO the airport or Amtrak, you can call an airline limo or a taxi (though if you can give them a ride, that's of course good).

9) Pick a day early and stick with it. NO day is perfect, so if someone can't make it, or there is a faculty meeting that day, don't worry about it.

10) Have the website and speakers all set at least a month ahead of time. Advertise on theory-local email lists, blogs you have access to, and the analogs of theory-local lists for other places (I did NY, NJ, PA). Also email people to spread the word.

11) Advertise to undergrads. Students are the future!

12) If you are the organizer you might not want to give a talk, since you'll be too busy doing other things.

13) Well-established regular theory days (e.g., the NY theory day) can ignore some of the above, as they already have things running pretty well.

### Fefe

#### Kommentar zu dem S-400-Verkauf: Es gab vor Kurzem einen ...

A comment on the S-400 sale:

1.
There was recently an English-language article about the Russians expressing their confidence in these matters. The claim was that the Chinese will need years to reverse-engineer the system, and by then the Russians will have the next generation ready. Whoever reverse-engineers is always lagging behind.

2. If the Chinese use systems equivalent to the Russians', then the Russians at least know the Chinese capabilities well. Several conflicts have shown that precise knowledge of an air-defense system already largely devalues it (e.g., Roland in the Falklands, where the British knew everything essential about it from the French; or the Russian systems in the Bekaa Valley, which the Israelis, with thorough preparation, defeated completely).

3. The Russians simply need the money.

4. There have been no new signs of "monkey model" export versions of Russian arms for a long time now. They presumably assume, correctly, that the West can by now get at technical secrets through espionage inside Russia itself. "Export versions" are in some cases even more elaborate than the versions Russia procures for itself, or (as in India's case) tailored to the customer's wishes.

The S-400 version with the longest range is, incidentally, relevant against radar aircraft. An E-3 AWACS can barely detect and track modern combat aircraft at 200-300 km, so in the presence of S-400 the AWACS gets pushed so far back that it becomes a purely defensive system. Attacking combat aircraft would then have to cover each other with their forward-looking radars (which requires very many aircraft). One answer to this would be the famous "stealth" approach, but that does not work well at very long radar wavelengths, and the S-400 exploits this. The West (as far as is publicly known) does not even have a really good answer to the S-400's predecessor, the S-300, in its newer versions.
As soon as S-300/S-400 batteries are sitting around somewhere, the typical American bluster of "cruise missile diplomacy" is evidently no longer possible.

#### Seid ihr schon mal vom Support nach eurem Plattenkrypto-Passwort ...

Have you ever been asked by support for your disk-encryption password? Me neither. Then again, I always remove the drives before I send a device in anywhere. Here is a report that came in by email:

How about just asking directly for administrator passwords? That's much easier anyway. Yesterday an employee at an Apple Certified Service Provider asked me for my password so that the technician could decrypt my drive. Without the data on the drive, I was told, Apple would not be able to verify that the hardware is okay (???). Sure, that's totally normal. It all gets sent over the Internet to Apple. They don't do anything bad with it. It's all for my own safety. The reason for the repair was a loose part in the case. So the technician basically had to tighten a screw. These special screws are apparently so complex that you definitely need unrestricted access to everything, and also have to check whether the data on the drive is the way Apple wants it.

#### Neulich in New York: Schwedische Cops im Urlaub in ...

Recently in New York: Swedish cops on vacation in the New York subway show the Americans how to end a brawl without firearms. New Yorkers are now a little unsettled that the situation could be de-escalated entirely without dead black people.

#### Eines der für mich faszinierensten Themen ist, wie ...

One of the most fascinating topics for me is how people deal with guilt. Just take this case: an Australian health blogger who described in her blog how she beat her cancer with a healthy diet.
She got a book deal, several hundred thousand in donations, Apple invited her to Cupertino (she had also turned her diet stuff into a popular app) ... and then it came out that she suffered from dishonesty rather than cancer. It was all a pack of lies. The money is gone. The book deal naturally fell through too, and the app is no longer in the App Store. A good opportunity for a bit of self-reflection, you would think. And what does she have to say?

"I just think [speaking out] was the responsible thing to do. Above anything, I would like people to say, 'Okay, she's human. She's obviously had a big life. She's respectfully come to the table and said what she's needed to say, and now it's time for her to grow and heal.'"

In other words: she still sees herself as the hero of the story. It was only human, and look, I came forward and confessed! Now I have to grow up and heal. Yes, heal! Says the woman who, with false cancer claims, ... what do you even call it ... embezzled hundreds of thousands of dollars in donations? The police, by the way, have decided not to investigate. Imagine Bernie Madoff standing up in court and saying: look, I confessed, now it's time for me to grow up and heal. WTF?

#### Bei Gedenkfeiern ist ja Authentizität wichtig. Aber ...

Authenticity matters at memorial ceremonies. But you can also overdo it.

#### Kennt ihr das Problem? Beerdigung und kaum jemand kommt? ...

You know the problem? A funeral, and hardly anyone shows up? That is especially awkward in the Chinese cultural sphere, where a large crowd of mourners counts as a lucky omen for the life after death. So what to do? Of course: hire strippers! The Chinese government is unhappy and is now cracking down. The phenomenon seems to be older, though, and to have made it to Taiwan as well.
### Lobsters

#### Your template language is killing the web.

### QuantOverflow

#### Time series analysis on illiquid price data?

Say I have the following companies in some specialized industry:

A - about to be listed on Exchange 1, i.e., no price history
B - produces similar products to Company A and is also listed on Exchange 1; however, B has very thin volume and its price can stay the same for weeks
C - similar to Company A but listed on Exchange 2, again with thin trading volume
D - similar to Company B but listed on Exchange 2, also with thin trading volume

For companies B, C and D, I have their historical EOD prices for the past two years. Exchanges 1 and 2 are on different continents and there is very little correlation between the two; there is also no index for this industrial sector. (However, the prices of companies C&D and A&B should be correlated.) Also, we cannot assume the price time series are non-stationary, as the products these companies produce can be seasonal in nature.

I would like to figure out the "correct" market price for Company A before it is listed, based on the information above. My results so far show that each of the price time series I have fits a different ARIMA model. Therefore, my question is: how can I tackle these price data to start my analysis? Bear in mind that these are all the data I have and can get.

### StackOverflow

#### How do I navigate through a nested array-map that contains vectors in Clojure

I have the following array-map created in Clojure.
```clojure
{:node 7, :children [{:node 8, :children []}
                     {:node 6, :children []}
                     {:node 23, :children {}}
                     {:node 43, :children []}]}
```

How do I go about adding elements to this? Running the following code

```clojure
(def tree (assoc-in tree [:node] 12))
```

gives me

```clojure
{:node 12, :children [{:node 8, :children []}
                      {:node 6, :children []}
                      {:node 10, :children {}}
                      {:node 13, :children []}
                      {:node 28, :children []}]}
```

and running

```clojure
(def tree (assoc-in tree [:node :children] 12))
```

gives me the following error message. How do I add elements to the children sections of the array-map?

```
Exception in thread "main" java.lang.ClassCastException:
java.lang.Long cannot be cast to clojure.lang.Associative
```

### CompsciOverflow

#### Graph build for static analysis

I'm new to the world of static analysis and am trying to build a new analysis for the LLVM compiler. However, I feel I have some theoretical gaps in the field and find it difficult to proceed. Also, I cannot find any useful practical tutorial on the web about how to build an analysis. (I'd really appreciate it if you had any to pass on to me.) I'm at the phase of building the graph, but I'm dealing with some questions I don't have sure answers to:

1. Do I really need the symbol table, or can I skip it and add the nodes straight to the graph? I developed a symbol table to label every variable in its scope. Could I skip it and add the types straight to the graph?

2. Speaking of the graph, doesn't it depend on the nature of the analysis? Does a simple control flow graph satisfy every analysis? Isn't it different when we have a control-flow analysis versus a data-flow analysis?

3. Is there any useful material on how to build an analysis on top of LLVM, or one in general? (Or any material on building a graph for it.)

Thanks in advance :)

### TheoryOverflow

#### Is there any special property the resulting graph G' has?
An undirected graph G can be partitioned into several vertex blocks; each vertex pair (u,v) has an edge if u and v are in different blocks, and no edge otherwise. The question is: suppose we have several such graphs G_1, ..., G_n, where a vertex v_i may be in more than one G_j. If we combine G_1, ..., G_n by taking the union of their edge sets to get a graph G', as shown in the example below, what is the name of the resulting graph? Does G' have any special properties with respect to the vertex cover problem or other classical problems?

### CompsciOverflow

#### What do we know about NP ∩ co-NP and its relation to NPI?

A TA dropped by today to ask some things about NP and co-NP. We arrived at a point where I was stumped, too: what does a Venn diagram of P, NPI, NP, and co-NP look like, assuming P ≠ NP (the other case is boring)? There seem to be four basic options.

1. NP ∩ co-NP = P. In particular, co-NPI ∩ NPI = ∅.

2. NP ∩ co-NP = P ∪ NPI. In particular, co-NPI = NPI?

3. NP ∩ co-NP ⊃ P ∪ NPI ∪ co-NPI. A follow-up question in this case is how NPC and co-NPC are related; is there an overlap?

4. Something else; in particular, some problems from NPI are in co-NP and others are not.

Do we know which is right, or at least which cannot be true? The Complexity Zoo entries for NPI and NP ∩ co-NP do not inspire much hope that anything is known, but I'm not really fluent enough in complexity theory to comprehend all the other classes (and their impact on this question) floating around there.

### QuantOverflow

#### How to estimate CVA by valuing a CDS of the counterparty?

I'm trying to estimate the CVA of one of my derivatives by valuing a credit default swap (CDS) of my counterparty. However, I don't know how to set up the CDS deal (notional amount, maturity, etc.). Thanks!
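As background for the question above: a common first-order way to connect a CDS quote to CVA is the "credit triangle" approximation, where a flat hazard rate λ ≈ spread / (1 − R) gives survival probabilities, and CVA ≈ (1 − R) · Σ EE(tᵢ) · [S(tᵢ₋₁) − S(tᵢ)]. The sketch below is illustrative only: all names are made up, discounting is ignored, and a real setup would bootstrap a full hazard curve from the CDS term structure.

```scala
// Rough CVA sketch (not a production model): discretize time, take the
// derivative's expected exposure EE(t) on a grid, and default probabilities
// implied by the counterparty's CDS spread via a flat hazard rate.
object CvaSketch {
  def cva(ee: Vector[Double],     // expected exposure at each grid point
          times: Vector[Double],  // grid in years, same length as ee
          cdsSpread: Double,      // flat CDS spread, e.g. 0.02 = 200 bp
          recovery: Double = 0.4): Double = {
    val lambda = cdsSpread / (1 - recovery)      // credit-triangle hazard rate
    def surv(t: Double) = math.exp(-lambda * t)  // survival probability S(t)
    val grid = 0.0 +: times
    // sum over intervals: LGD * exposure * probability of default in interval
    (1 to times.length).map { i =>
      (1 - recovery) * ee(i - 1) * (surv(grid(i - 1)) - surv(grid(i)))
    }.sum
  }
}
```

For example, `CvaSketch.cva(Vector(100.0, 100.0), Vector(0.5, 1.0), 0.02)` charges roughly 2 units of CVA against a constant exposure of 100 over one year at a 200 bp spread.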
### /r/netsec

#### Klikki Oy - WordPress 4.2 Stored XSS

### StackOverflow

#### heroku permission denied on netty start

I have deployed a Play Scala/Java web app on Heroku that uses an embedded Netty server. The Procfile command is

```
web: target/universal/stage/bin/mmbu-timesheets -Dhttp.port=80
```

Taken from heroku logs:

```
2015-04-27T13:49:27 heroku[web.1]: Starting process with command target/universal/stage/bin/mmbu-timesheets -Dhttp.port=80
2015-04-27T13:49:29 app[web.1]: Picked up JAVA_TOOL_OPTIONS: -Xmx384m -Xss512k -Dfile.encoding=UTF-8 -Djava.rmi.server.useCodebaseOnly=true
2015-04-27T13:49:29 app[web.1]: Play server process ID is 3
2015-04-27T13:49:32 app[web.1]: [info] application - mongodb connection ds031701.mongolab.com:31701 db->heroku_app36286493
2015-04-27T13:49:33 app[web.1]: [info] play - Application started (Prod)
2015-04-27T13:49:33 app[web.1]: Oops, cannot start the server.
2015-04-27T13:49:33 app[web.1]: org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:80
2015-04-27T13:49:33 app[web.1]:     at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer$$anonfun$8.apply(NettyServer.scala:92)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer$$anonfun$8.apply(NettyServer.scala:89)
2015-04-27T13:49:33 app[web.1]:     at scala.Option.map(Option.scala:146)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer.<init>(NettyServer.scala:89)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer.createServer(NettyServer.scala:206)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:243)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:238)
2015-04-27T13:49:33 app[web.1]:     at scala.Option.map(Option.scala:146)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer.main(NettyServer.scala:238)
2015-04-27T13:49:33 app[web.1]:     at play.core.server.NettyServer.main(NettyServer.scala)
2015-04-27T13:49:33 app[web.1]: Caused by: java.net.SocketException: Permission denied
2015-04-27T13:49:33 app[web.1]:     at sun.nio.ch.Net.bind0(Native Method)
2015-04-27T13:49:33 app[web.1]:     at sun.nio.ch.Net.bind(Net.java:437)
2015-04-27T13:49:33 app[web.1]:     at sun.nio.ch.Net.bind(Net.java:429)
2015-04-27T13:49:33 app[web.1]:     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
2015-04-27T13:49:33 app[web.1]:     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
2015-04-27T13:49:33 app[web.1]:     at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
2015-04-27T13:49:33 app[web.1]:     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
2015-04-27T13:49:33 app[web.1]:     at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
2015-04-27T13:49:33 app[web.1]:     at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
2015-04-27T13:49:33 app[web.1]:     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2015-04-27T13:49:33 app[web.1]:     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2015-04-27T13:49:33 app[web.1]:     at java.lang.Thread.run(Thread.java:745)
2015-04-27T13:49:34 heroku[web.1]: Process exited with status 255
2015-04-27T13:49:34 heroku[web.1]: State changed from starting to crashed
```

I get a "Permission denied" exception.
Is there a way to start the Netty server on Heroku?

### QuantOverflow

#### forecasting trading costs with end of day data

I am trying to create a model that forecasts trading costs (using end-of-day data, so no intraday data). My trading cost, also called implementation shortfall (IS), is defined as follows for a single stock:

```
IS = (vwap - open) / open
```

and relative to the market as a whole:

```
IS = abs(IS_single_stock - IS_market_median)
```

Variables I am looking at include a company's market cap, the daily spread, vwap, volume, and a liquidity measure called liq_m. A simple linear regression of each variable against IS produced very low r-squared values, below 0.1, and combining the variables did very little to improve the results. The residual plots appear to have some pattern; the one below, which is similar for most of the variables, is mcap vs. IS residuals. The normal probability plot also highlights that the residuals are not linear and have a left skew. In the literature I have read on implementation shortfall, all the models are non-linear, so this is not unexpected. I am unsure, though, how to proceed next, i.e., how do I select an appropriate non-linear model for testing? The end goal is a model that allows me to forecast the cost of trading a certain company.

Below are two more plots. One is the daily plot of mcaps over time; a mean is used to calculate the mcap of the 100 companies in the sample. Beneath that is the implementation shortfall, again plotted as a mean.

### StackOverflow

#### StackOverflowError for coin change in Scala?

I'm writing a recursive function for the coin change problem in Scala. My implementation breaks with a StackOverflowError and I can't figure out why it happens.
```
Exception in thread "main" java.lang.StackOverflowError
  at scala.collection.immutable.$colon$colon.tail(List.scala:358)
  at scala.collection.immutable.$colon$colon.tail(List.scala:356)
  at recfun.Main.recurs$1(Main.scala:58) // repeat this line over and over
```

This is my call:

```scala
println(countChange(20, List(1, 5, 10)))
```

and this is my definition:

```scala
def countChange(money: Int, coins: List[Int]): Int = {
  def recurs(money: Int, coins: List[Int], combos: Int): Int = {
    if (coins.isEmpty) combos
    else if (money == 0) combos + 1
    else recurs(money, coins.tail, combos + 1) + recurs(money - coins.head, coins, combos + 1)
  }
  recurs(money, coins, 0)
}
```

Edit: I just added this else-if branch into the mix:

```scala
else if (money < 0) combos
```

It got rid of the error, but my output is 1500-something :( What is wrong with my logic?

### /r/compsci

#### Artificial Intelligence and Atari - Hill Climbing Algorithms with Fishing Derby

### StackOverflow

#### Constructing TypeTags of higher-kinded types

Given a simple parametrized type like class LK[A], I can write

```scala
def tagLK[A](implicit tA: TypeTag[A]) = typeTag[LK[A]]
// or simpler
def tagLK[A: TypeTag] = typeTag[LK[A]]

tagLK[Int] == typeTag[LK[Int]] // true
```

Now I'd like to write an analogue for class HK[F[_], A]:

```scala
def tagHK[F[_], A](implicit ???) = typeTag[HK[F, A]] // or some other implementation?

tagHK[Option, Int] == typeTag[HK[Option, Int]]
```

Is this possible? I've tried

```scala
def tagHK[F[_], A](implicit tF: TypeTag[F[_]], tA: TypeTag[A]) = typeTag[HK[F, A]]
def tagHK[F[_], A](implicit tF: TypeTag[F], tA: TypeTag[A]) = typeTag[HK[F, A]]
```

but neither works, for the obvious reasons (in the first case F[_] is the existential type instead of the higher-kinded one; in the second, TypeTag[F] doesn't compile). I suspect the answer is "it's impossible", but I would be very happy if it isn't.

#### Clojure tree from list of edges

Using Clojure, I currently have a list of edges in the format

```clojure
{:a 13, :b 29, :cost 2, :children {}}
```

What would be the best way to create a tree from this list?
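The Clojure question above (like the nested array-map question earlier on this page) boils down to building a nested tree out of a flat edge list. A minimal sketch of the grouping idea, written in Scala for consistency with the rest of this page; the names and the (parent, child) edge encoding are my own, and the input is assumed to be acyclic:

```scala
// Hypothetical sketch: edges are (parent, child) pairs, and the result
// mirrors the {:node n, :children [...]} shape from the question.
case class Tree(node: Int, children: List[Tree])

object TreeBuild {
  def build(root: Int, edges: List[(Int, Int)]): Tree = {
    val byParent = edges.groupBy(_._1)  // parent -> its outgoing edges
    def grow(n: Int): Tree =            // recursion assumes no cycles
      Tree(n, byParent.getOrElse(n, Nil).map { case (_, child) => grow(child) })
    grow(root)
  }
}
```

The one-time `groupBy` makes the lookup per node O(1), so the whole build is linear in the number of edges.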
#### Why does IDEA report "Error:scalac: error while loading Object, Missing dependency 'object scala in compiler mirror'" building scala breeze?

The breeze project builds fine from the command-line sbt:

```
sbt package
...
[info] Done packaging.
[info] Packaging /shared/breeze/viz/target/scala-2.11/breeze-viz_2.11-0.11-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 238 s, completed Jan 27, 2015 9:40:03 AM
```

However, the following error comes up repeatedly when doing Build | Rebuild Project in IntelliJ IDEA 14:

```
Error:scalac: error while loading Object, Missing dependency 'object scala in compiler mirror', required by /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/jre/lib/rt.jar(java/lang/Object.class)
```

Here is the full stack trace:

```
Error:scalac: Error: scala.tools.nsc.typechecker.Namers$Namer.enterExistingSym(Lscala/reflect/internal/Symbols$Symbol;)Lscala/tools/nsc/typechecker/Contexts$Context;
java.lang.NoSuchMethodError: scala.tools.nsc.typechecker.Namers$Namer.enterExistingSym(Lscala/reflect/internal/Symbols$Symbol;)Lscala/tools/nsc/typechecker/Contexts$Context;
    at org.scalamacros.paradise.typechecker.Namers$Namer$class.enterSym(Namers.scala:41)
    at org.scalamacros.paradise.typechecker.Namers$$anon$3.enterSym(Namers.scala:13)
    at org.scalamacros.paradise.typechecker.AnalyzerPlugins$MacroPlugin.pluginsEnterSym(AnalyzerPlugins.scala:35)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$13.custom(AnalyzerPlugins.scala:429)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$$anonfun$2.apply(AnalyzerPlugins.scala:371)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$$anonfun$2.apply(AnalyzerPlugins.scala:371)
    at scala.collection.immutable.List.map(List.scala:273)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$class.invoke(AnalyzerPlugins.scala:371)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$class.pluginsEnterSym(AnalyzerPlugins.scala:423)
    at scala.tools.nsc.Global$$anon$1.pluginsEnterSym(Global.scala:463)
    at scala.tools.nsc.typechecker.Namers$Namer.enterSym(Namers.scala:274)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$enterSyms$1.apply(Namers.scala:500)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$enterSyms$1.apply(Namers.scala:499)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at scala.tools.nsc.typechecker.Namers$Namer.enterSyms(Namers.scala:499)
    at scala.tools.nsc.typechecker.Namers$Namer.templateSig(Namers.scala:925)
    at scala.tools.nsc.typechecker.Namers$Namer.moduleSig(Namers.scala:989)
    at scala.tools.nsc.typechecker.Namers$Namer.getSig$1(Namers.scala:1526)
    at scala.tools.nsc.typechecker.Namers$Namer.typeSig(Namers.scala:1541)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply$mcV$sp(Namers.scala:781)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply(Namers.scala:780)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply(Namers.scala:780)
    at scala.tools.nsc.typechecker.Namers$Namer.scala$tools$nsc$typechecker$Namers$Namer$$logAndValidate(Namers.scala:1568)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1.apply(Namers.scala:780)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1.apply(Namers.scala:772)
    at scala.tools.nsc.typechecker.Namers$$anon$1.completeImpl(Namers.scala:1684)
    at scala.tools.nsc.typechecker.Namers$LockingTypeCompleter$class.complete(Namers.scala:1692)
    at scala.tools.nsc.typechecker.Namers$$anon$1.complete(Namers.scala:1682)
    at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1483)
    at scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:286)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:63)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:59)
    at scala.reflect.api.Trees$Traverser$$anonfun$traverseStats$1$$anonfun$apply$1.apply$mcV$sp(Trees.scala:2498)
    at scala.reflect.api.Trees$Traverser.atOwner(Trees.scala:2507)
    at scala.reflect.api.Trees$Traverser.traverseStats(Trees.scala:2497)
    at scala.reflect.internal.Trees$class.itraverse(Trees.scala:1326)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Traverser.traverse(Trees.scala:2475)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:66)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:59)
    at scala.reflect.api.Trees$Traverser$$anonfun$traverseStats$1$$anonfun$apply$1.apply$mcV$sp(Trees.scala:2498)
    at scala.reflect.api.Trees$Traverser.atOwner(Trees.scala:2507)
    at scala.reflect.api.Trees$Traverser.traverseStats(Trees.scala:2497)
    at scala.reflect.internal.Trees$class.itraverse(Trees.scala:1326)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Traverser.traverse(Trees.scala:2475)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:66)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:59)
    at scala.reflect.api.Trees$Traverser.apply(Trees.scala:2513)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2.apply(Analyzer.scala:71)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$applyPhase$1.apply$mcV$sp(Global.scala:441)
    at scala.tools.nsc.Global$GlobalPhase.withCurrentUnit(Global.scala:432)
    at scala.tools.nsc.Global$GlobalPhase.applyPhase(Global.scala:441)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$run$1.apply(Global.scala:399)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$run$1.apply(Global.scala:399)
    at scala.collection.Iterator$class.foreach(Iterator.scala:750)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
    at scala.tools.nsc.Global$GlobalPhase.run(Global.scala:399)
    at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1500)
    at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1487)
    at scala.tools.nsc.Global$Run.compileSources(Global.scala:1482)
    at scala.tools.nsc.Global$Run.compile(Global.scala:1580)
    at xsbt.CachedCompiler0.run(CompilerInterface.scala:126)
    at xsbt.CachedCompiler0.run(CompilerInterface.scala:102)
    at xsbt.CompilerInterface.run(CompilerInterface.scala:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:102)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:48)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:41)
    at org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:29)
    at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:26)
    at org.jetbrains.jps.incremental.scala.remote.Main.make(Main.scala:65)
    at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala:23)
    at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)
```

#### Json object (de)serialization. Typed languages with class hierarchy

I encountered a quite common problem with JSON in a web client-server application. Context: Scala (could be any typed language with inheritance), TypeScript + AngularJS, a JSON representation in NoSQL PostgreSQL, and the same JSON sent to the web client.
How can I add type information to the JSON representation to get features such as:

1. describing generics like List
2. easy use of inheritance in the JavaScript/TypeScript world
3. line-by-line (SAX-like) deserialization, for fast transformation with minimal memory use

Adding a type attribute to the object, like {myList: (...), (...), type: ??}, interferes with point 3, since there is no guarantee of attribute order. Adding the type as an attribute name, like List#Integer#: {myList: (...), (...)}, makes the code ugly on the client side due to an additional wrapper/prefix everywhere. How can this be solved? Maybe somebody knows of a Scala JSON library that already supports types? Many libraries just assume that you know what type you are loading...

#### scala filter by type

I have read a TypeTag-related article, but I am unable to filter a collection by element type. Example:

```scala
trait A
class B extends A
class C extends A

val v = Vector(new B, new C)
v filter ( _.isInstanceOf[B] )
```

The code above works fine. However, I want to extract the filter out of v, e.g.:

```scala
def filter[T, T2](data: Traversable[T2]) =
  (data filter ( _.isInstanceOf[T] )).asInstanceOf[Traversable[T]]

// Then filter v by
filter[B, A](v)
```

In this case I get the warning "abstract type T is unchecked since it is eliminated by erasure". I tried to use TypeTag, but it seems it is not easy to get the Type at runtime. Is there any elegant solution for such a filter function? A solution via Scala macros is also acceptable.

#### Functional style of Java 8's Optional.ifPresent and if-not-Present?

In Java 8, I want to do something with an Optional object if it is present, and another thing if it is not present.

```java
if (opt.isPresent())
  System.out.println("found");
else
  System.out.println("Not found");
```

But I think this is not very 'functional style'. Optional has an ifPresent method, but I am unable to chain an orElse method.
So I cannot write:

```java
opt.ifPresent( x -> System.out.println("found " + x))
   .orElse( System.out.println("NOT FOUND"));
```

Is there any other way?

Thanks @assylias, but I don't think Optional.map() works for the following case:

```java
opt.map( o -> {
  System.out.println("while opt is present...");
  o.setProperty(xxx);
  dao.update(o);
  return null;
}).orElseGet( () -> {
  System.out.println("create new obj");
  dao.save(new obj);
  return null;
});
```

In this case, when opt is present, I update its property and save it to the DB; when it is not, I create a new obj and save that to the DB. Note that in the two lambdas I have to return null. But when opt is truly present, both lambdas will be executed: the object will be updated, and a new object will also be saved to the DB. This is because of the return null in the first lambda, which makes orElseGet() run as well.

### High Scalability

#### How can we Build Better Complex Systems? Containers, Microservices, and Continuous Delivery.

We must be able to create better complex software systems. That's the message from Mary Poppendieck in a wonderful, far-ranging talk she gave at the Craft Conference: New New Software Development Game: Containers, Micro Services. The driving insight is that complexity grows nonlinearly with size. The type of system doesn't really matter; we know software size will continue to grow, so software complexity will continue to grow even faster. What can we do about it? The running themes are lowering friction and limiting risk:

• Lower friction. This allows change to happen faster. Methods: dump the centralizing database; adopt microservices; use containers; better organize teams.
• Limit risk. Risk is inherent in complex systems. Methods: PACT testing; continuous delivery.

Some key points:

• When does software really grow? When smart people can do their own thing without worrying about their impact on others.
This argues for building federated systems that ensure isolation, which argues for using microservices and containers.
• Microservices usually grow successfully from monoliths. In creating a monolith, developers learn how to properly partition a system.
• Continuous delivery both lowers friction and lowers risk. In a complex system, if you want stability, security, reliability, and safety, then you must have lots of little deployments.
• Every member of a team is aware of everything. That's what makes a winning team: good situational awareness.

The highlight of the talk for me was the section on the amazing design of the Swedish Gripen fighter jet. Talks on microservices tend to be highly abstract; the fun of software is in the building, and talk about parts can be so nebulous. With the Gripen, the federated design of the jet as a System of Systems becomes glaringly concrete and real. If you can replace your guns, radar system, and virtually any other component without impacting the rest of the system, that's something! Mary really brings this part of the talk home. Don't miss it. It's a very rich and nuanced talk, with a lot of history and context given, so I can't capture all the details; watching the video is well worth the effort. Having said that, here's my gloss on the talk...

## Hardware Scales by Abstraction and Miniaturization

### StackOverflow

#### Slick 3.0-RC3 fails with java.util.concurrent.RejectedExecutionException

I'm trying to get familiar with Slick 3.0 and Futures (using Scala 2.11.6). I use simple code based on Slick's Multi-DB Cake Pattern example. Why does the following code terminate with an exception, and how can I fix it?
    import scala.concurrent.Await
    import scala.concurrent.duration._
    import slick.jdbc.JdbcBackend.Database
    import scala.concurrent.ExecutionContext.Implicits.global

    class Dispatcher(db: Database, dal: DAL) {
      import dal.driver.api._

      def init() = {
        db.run(dal.create)
        try db.run(dal.stuffTable += Stuff(23, "hi"))
        finally db.close
        val x = {
          try db.run(dal.stuffTable.filter(_.serial === 23).result)
          finally db.close
        }
        // This crashes:
        val result = Await.result(x, 2 seconds)
      }
    }

Execution fails with:

    java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@5c73f637 rejected from java.util.concurrent.ThreadPoolExecutor@4129c44c[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
      at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
      at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
      at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
      at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
      at slick.backend.DatabaseComponent$DatabaseDef$class.runSynchronousDatabaseAction(DatabaseComponent.scala:224)
      at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:38)
      at slick.backend.DatabaseComponent$DatabaseDef$class.runInContext(DatabaseComponent.scala:201)
      at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:38)
      at slick.backend.DatabaseComponent$DatabaseDef$class.runInternal(DatabaseComponent.scala:75)
      at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:38)
      at slick.backend.DatabaseComponent$DatabaseDef$class.run(DatabaseComponent.scala:72)
      at slick.jdbc.JdbcBackend$DatabaseDef.run(JdbcBackend.scala:38)
      at Dispatcher.init(Dispatcher.scala:15)
      at SlickDemo.main(SlickDemo.scala:16)
      at SlickDemo.main(SlickDemo.scala)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)

### TheoryOverflow

#### Is unary factoring easier than binary factoring? [on hold]

I've been studying ideas in complexity for a little while now, but I realized that I don't actually know why factoring is a difficult problem: naively it looked like a linear algorithm (simply running through all the numbers less than a given number, to see if they are factors). This isn't linear, however, because the inputs are encoded in binary. So is factoring in unary an 'easy' problem? I've often struggled to understand how the mere fact that something is encoded in binary gives rise to its computational difficulty. This is also the case with Knapsack and the analogous Unary Knapsack, correct?

### StackOverflow

#### Capturing comma separated list as list through regex in scala

I have input in the following format:

    "DataType: FieldName1, Fieldname2,FieldName3"

where you can have one or more field names. So for example:

    User: Name, Address
    Person: Age, Address,DOB

I'm trying to capture the DataType in a string and the fields in an array using Scala group capture. This is what I have so far:

    val dataTypeAndFieldsRegex = """(.+):(.*(,.*)?)""".r
    "Person: Age, Address, DOB" match {
      case dataTypeAndFieldsRegex(dataType, fields, _*) => {
        println("dataType: " + dataType)
        println("fields: " + fields)
      }
    }

The problem is that fields here is a string. How can I capture the fields as an array?

### /r/netsec

#### TeslaCrypt – Decrypt It Yourself

### CompsciOverflow

#### Can we build a nondeterministic decider PDA using two PDAs accepting a language and its complement?
When talking about Turing machines, it can easily be shown that, starting from two machines accepting L and its complement L^c, one can build a machine that fully decides whether a word is in L. But what about PDAs? Starting from two different PDAs, one accepting L and one accepting L^c, can we build another PDA that accepts L, and only crashes or halts in non-final states (rejects) when w ∉ L?

### StackOverflow

#### Ansible: how to continue execution on failed task after fixing error in playbook?

When writing and debugging Ansible playbooks, the typical workflow is as follows:

1. ansible-playbook ./main.yaml
2. The playbook fails on some task.
3. Fix that task and repeat step 1, waiting for all previous tasks to execute again, which takes a lot of time.

Ideally, I'd like to resume execution at the failed task, keeping the inventory and all facts collected by previous tasks. Is that even possible? How can I make playbook writing/debugging faster?

### QuantOverflow

#### Are CME security id's unique and constant over time?

For any given day, CME security IDs are unique: a number will always refer to a single product. Are they unique over time as well? That is, might a new security have a security ID that used to be used by an expired one? And are they constant? That is, does a given security keep the same security ID over its lifetime?

### StackOverflow

#### idiomatic lazy atoms in clojure

I am playing a bit with atoms in Clojure. I have an atom pointing at a lazy seq. In another bit of code I want to update the value of the atom to the result of calling next on the sequence, but given that both swap! and reset! return the updated value, execution never ends. I figured out that I could always wrap the call to swap!/reset! in a do statement and then return nil, but I am wondering how idiomatic this is, or whether there is an alternative solution. Doesn't terminate:

    (def x (atom (range)))
    (swap! x next)

Terminates:

    (def x (atom (range)))
    (do (swap! x next) nil)
    (first @x) ;1
    (do (swap! x next) nil)
    (first @x) ;2

### Lobsters

#### This Web App Best Viewed By Someone Else

### CompsciOverflow

#### Determining which states in a transition system satisfy a specific CTL formula

Trying to answer the following question: However, my answer is that only one of these states satisfies the formula (which is surely wrong, since the next part of the question asks to remove the states that don't satisfy the formula and compute the new TS). My reasoning follows:

1 - does not satisfy the formula, since if you go to 4, EXISTS NEXT c is violated
2 - does not satisfy the formula, since if you go to 4, EXISTS NEXT c is violated
3 - does not satisfy the formula, since if you go to 1, EXISTS NEXT c is violated
4 - satisfies the formula, since all paths satisfy EXISTS NEXT c
5 - does not satisfy the formula, since if you go to 6, EXISTS NEXT c is violated
6 - does not satisfy the formula, since if you go to 4, EXISTS NEXT c is violated

Can anyone see where I have gone wrong with my reasoning? Something else I'm not sure about: if we take 4, for example, it is satisfied since all paths lead to other states that (together) satisfy the formula. Do we need to include these 'other states' in the satisfaction set? Really grateful for any help.

### Lobsters

#### HTTP Responder: debug and stub webhooks like a pro.

### /r/clojure

#### compojure-api releases a new version with Swagger 2 support

### Lobsters

#### Writing better API tests

#### Folding Erlang Records

#### The #1 reason I don’t write Haskell

#### On Haskell, Ruby, and Cards Against Humanity

### QuantOverflow

#### Equity Chart - design and granularity

I am looking to build a web-based equity chart to display the performance of FX trading strategies. I would like to hear opinions and advice on a few areas I am unsure about.

Granularity. Equity can, and typically does, change every tick. Should I therefore save equity every tick? If I do, I am likely to be saving a lot of data!
And then the display of this data will also be a challenge, as there would be a lot of noise. If I instead save a snapshot at some interval, what would be a recommended timeframe? Every minute?

Optimizing for download. As the amount of data in the equity chart could be quite large, what are some recommended approaches to optimize for download? Would it be advisable to somehow smooth the equity curve and download just a vector line, rather than downloading a CSV/JSON file with many thousands of datapoints? Thanks for any feedback; it's really appreciated.

### AWS

#### AWS Week in Review – April 20, 2015

Let’s take a quick look at what happened in AWS-land last week:

Monday, April 20
• We announced the NOAA Big Data Project.
• The AWS Java Blog discussed the AWS Toolkit for Eclipse Integration with AWS OpsWorks.
• The AWS Architecture Blog continued the re:viewing re:Invent series with a focus on the architecture track.
• The Cloud Academy Blog discussed Virtual Private Clouds and the AWS Solutions Architect Exam.

Tuesday, April 21
• We updated the AWS SDK for Android.
• The AWS Startup Collection published an IoT Primer: Telemetry on the Cloud.
• The Cloudlytics Blog invited you to See What’s New in the March Release of ELB Reports.
• The 2nd Watch Blog talked about Digital Marketing in the Cloud.
• The Cloud Academy Blog talked about Elastic File System – What You Need to Know About Amazon’s New Service.

Wednesday, April 22
• We posted 54 videos from the AWS Summit in San Francisco.
• The AWS Security Blog announced the IAM Last Access Key Used feature.
• The 2nd Watch Blog talked about Websites & Web Hosting in the Cloud for Digital Marketing.
• The Cloud Academy Blog reviewed Amazon Storage Options.

Thursday, April 23
• We announced that Registration for re:Invent 2015 Opens Soon.
• We announced a VM Import Update, making it Faster and More Flexible, with Multi-Volume Support.
• The AWS Startup Collection talked about Cluster-Based Architectures Using Docker and Amazon EC2 Container Service.
• The Jimdo Dev Blog talked about Simple Service Discovery Using AWS Private Hosted Zones.
• The 2nd Watch Blog talked about Test/Dev in the Cloud for Digital Marketing.
• The Cloud Academy Blog discussed AWS Tags and What Makes Using Them so Important.
• The EC2 Container Service Team hosted an EC2 Container Service AUAA (Ask Us Almost Anything) on Reddit.
• The AWS Big Data Blog talked about Running a High Performance SAS Grid Manager Cluster on AWS with Intel Cloud Edition for Lustre.
• We updated the AWS Tools for Windows PowerShell, AWS SDK for .NET, AWS CLI, AWS SDK for JavaScript, and AWS SDK for Java.
• The AWS Partner Network Blog announced that SoftNAS Cloud Version 3.3 is Now Available in the AWS Marketplace.
• George Böhnisch explained How to Resize a gp2 Backed EBS Volume on an HVM Instance in Amazon EC2.

Friday, April 24
• We added AWS Customer Success Stories from Arterys, Civis Analytics, Pond, QNAP, and University of Maryland University College.
• We added ObserveIT User Activity Session Recording, Talari Cloud Appliance CT800, Thelia, GoldDisk Plus Windows 2012 Hardened AMI, Elasticsearch + Kibana (powered by Sain3), and HiveShield to the AWS Marketplace.
• We added UltraCompare, UltraFinder, UltraSentry, TurboCAD LTE Pro v7, and Visual Studio Ultimate 2013 to the AWS Marketplace for Desktop Apps.
• APN Premier Consulting Partner Logicworks published an AWS-powered PartyLite case study.
• The Cloud Academy Blog published a Brief Introduction to AWS Infrastructure Management.

Sunday, April 26
• Upcoming Events
• Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

### Christian Neukirchen

#### 27apr2015

### Lambda the Ultimate Forum

#### What makes LtU more or less enjoyable?

What makes LtU more or less enjoyable? These last few months I have been a bit frustrated by my own relation to LtU; I wondered on a few occasions whether I should stop visiting the website frequently, and I eventually sort of decided not to.
It is not easy to voice exactly what the problem is, and even less easy to know what a solution could be. I would say the observed symptom of the problem is simple: the proportion of LtU's discussions that I feel strongly interested in is decreasing. What about you?

I think a decrease in interest in LtU wouldn't be a frustrating problem if there were another place to supersede it. Unfortunately, I know of no such place: the alternative that is developing right now is a balkanization into a multitude of places, many of which are closed gardens (e.g. Quora) that I don't wish to help grow prosperous. (It's of course to be expected that not all LtU discussions interest everyone, as the topic is quite broad and people have different tastes about different subdomains.)

Why am I less interested in LtU discussions? I think there has at times been a better balance between technical discussion around articles (articles mostly following the standards of academic presentation) and less-focused discussion, with possibly more radical but less precise viewpoints. I don't think there are more less-focused / off-original-topic discussions than before, or too many of them, but rather that there are not enough of the more structured technical comments. In particular, I mean no criticism of the current LtU members or discussions, which bring many interesting points of view; there might be things to improve in this area, but I don't think that is where the real gains are. I would be more interested in attracting more "research discussions", but I don't know how to do it.

On the positive side, here are three examples of interactions that I personally enjoyed a lot recently, and that would by themselves justify my continued LtU activity this year:

• Tom Primožič linked the draft version of Andreas Rossberg's "amazing" 1ML paper; without this link, I probably wouldn't have learned about this exciting work for a few months.
• Sean McDirmid posted article versions of his work on type inference (and previously, Glitch), which helped make more precise the interesting discussions that had been going on and off on LtU for a long time.
• Robert Atkey saw an on-the-side remark inside the LtU submission on the very interesting work on incremental computation, and gave it enough thought to produce an amazing blog post that I'm sure will bear further fruit.

(That's of course not the only thing I liked on LtU recently. There are many things I come to know through LtU that I wouldn't otherwise learn about, typically on approaches to programming languages that are closer to the social sciences: user psychology and experimental studies, sociology of adoption, etc.)

What were your own "value moments" on LtU lately?

### StackOverflow

#### Scala Annotation Inheritance

In Scala, I have an annotation and a base trait carrying the annotation, but extending that trait doesn't inherit the annotation:

    scala> import scala.annotation.StaticAnnotation
    import scala.annotation.StaticAnnotation

    scala> case class AnnotationClass() extends StaticAnnotation
    defined class AnnotationClass

    scala> @AnnotationClass trait BaseTrait
    defined trait BaseTrait

    scala> class InheritingClass extends BaseTrait
    defined class InheritingClass

    scala> import scala.reflect.runtime.universe._
    import scala.reflect.runtime.universe._

    scala> typeOf[BaseTrait].typeSymbol.asClass.annotations.size
    res1: Int = 1

    scala> typeOf[InheritingClass].typeSymbol.asClass.annotations.size
    res0: Int = 0

Is there a way to get the subclass to inherit the annotation of the parent?

#### How to achieve following transformation using Spark 1.3.1 DataFrame API?

I have just started with Spark 1.3.1 and am not able to figure out the best way to achieve the following transformation using the available APIs. I have a DataFrame as follows:

    ID|TYPE|COUNT
    1|A|12
    1|B|10
    2|A|7
    2|B|9

which needs to be transformed into:

    ID|A_COUNT|B_COUNT
    1|12|10
    2|7|9

Thanks in advance!
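The transformation being asked for is a long-to-wide pivot: group by ID and turn each TYPE value into its own `<TYPE>_COUNT` column. Spark 1.3.1's DataFrame API has no built-in pivot, so the shape of the operation can be sketched on plain Python dicts (not Spark; the column names are taken from the question):

```python
# Sketch of the long-to-wide pivot from the question, on plain Python
# data structures rather than Spark (Spark 1.3.1 has no pivot built in).
rows = [
    {"ID": 1, "TYPE": "A", "COUNT": 12},
    {"ID": 1, "TYPE": "B", "COUNT": 10},
    {"ID": 2, "TYPE": "A", "COUNT": 7},
    {"ID": 2, "TYPE": "B", "COUNT": 9},
]

def pivot(rows):
    # Group by ID, turning each TYPE into its own <TYPE>_COUNT column.
    out = {}
    for r in rows:
        out.setdefault(r["ID"], {"ID": r["ID"]})[r["TYPE"] + "_COUNT"] = r["COUNT"]
    return sorted(out.values(), key=lambda r: r["ID"])

print(pivot(rows))
# [{'ID': 1, 'A_COUNT': 12, 'B_COUNT': 10}, {'ID': 2, 'A_COUNT': 7, 'B_COUNT': 9}]
```

In Spark 1.3.1 itself, the analogous move would be to group the rows by ID (e.g. on the underlying RDD) and build the wide row per key in the same way.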
### TheoryOverflow

#### Is the Chomsky hierarchy outdated?

The Chomsky(–Schützenberger) hierarchy is used in textbooks of theoretical computer science, but it obviously covers only a very small fraction of formal languages (REG, CFL, CSL, RE) compared to the full Complexity Zoo diagram. Does the hierarchy still play any role in current research? I found only a few references to Chomsky here at cstheory.stackexchange, and in the Complexity Zoo the names Chomsky and Schützenberger are not mentioned at all. Is current research more focused on means of description other than formal grammars?

I was looking for practical methods to describe formal languages of different expressiveness, and stumbled upon growing context-sensitive languages (GCSL) and visibly pushdown languages (VPL), both of which lie between the classic Chomsky levels. Shouldn't the Chomsky hierarchy be updated to include them? Or is there no use in selecting a specific hierarchy from the full set of complexity classes? I tried to select only those languages that fit into gaps of the Chomsky hierarchy, as far as I understand it:

REG (= Chomsky 3) ⊊ VPL ⊊ DCFL ⊊ CFL (= Chomsky 2) ⊊ GCSL ⊊ CSL (= Chomsky 1) ⊊ R ⊊ RE

I still don't get where "mildly context-sensitive languages" and "indexed languages" fit in (somewhere between CFL and CSL), although they seem to be of practical relevance for natural language processing (but maybe anything of practical relevance is less interesting in theoretical research ;-). In addition, one could mention GCSL ⊊ P ⊂ NP ⊂ PSPACE and CSL ⊊ PSPACE ⊊ R to show the relation to the famous classes P and NP.

I found some pointers on GCSL and VPL. I'd also be happy if you know any more recent textbook on formal grammars that also deals with VPL, DCFL, GCSL, and indexed grammars, preferably with pointers to practical applications.
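The strictness of the containments at the bottom of the hierarchy can be made concrete with the classic example { a^n b^n : n ≥ 0 }, which is context-free but not regular: recognizing it needs one unbounded counter, i.e. the stack discipline of a pushdown automaton, which no finite automaton has. A minimal recognizer sketch (illustrative only, not from the question):

```python
def in_anbn(w):
    # Recognize { a^n b^n : n >= 0 } with a single counter -- the
    # stack discipline of a PDA. No DFA can do this, which is what
    # separates REG (Chomsky 3) from CFL (Chomsky 2).
    count = 0
    seen_b = False
    for ch in w:
        if ch == "a":
            if seen_b:          # an 'a' after a 'b' is fatal
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:       # more b's than a's so far
                return False
        else:
            return False
    return count == 0

print([in_anbn(w) for w in ["", "ab", "aabb", "aab", "ba"]])
# [True, True, True, False, False]
```

The analogous witness one level up, { a^n b^n c^n }, is context-sensitive but not context-free.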
### StackOverflow

#### Storing a LatLon class as GeoJSON using MongoDB / Morphia (Scala or Java)

I'm writing an application that needs to make use of geo-referenced data, and I'd like to use MongoDB + Morphia. The application is in Scala, but if a portion needs to be Java, that's OK (yay compatibility!). I have a class to represent the latitude and longitude of events:

    class LatLon {
      @BeanProperty var latDegrees : Double = _
      @BeanProperty var lonDegrees : Double = _
    }

It's not a very exciting class, but it is useful in this context. Now, I have an event that I record at a location:

    class ObservedEvent {
      @BeanProperty var observation : String = _
      @BeanProperty var location : LatLon = _
    }

Now, I have a ton of observed events and I want to store them in MongoDB with Morphia. The location should be stored as a GeoJSON Point so I can index the collection, etc. I have tried making SimpleValueConverters, adapters, and a few other hacks, but I haven't been able to figure out how to make this work. It seems like such a common use case that it would be built in. Hopefully the answer here is "It's built in! Look [here]". If it is, I haven't found it :( Thanks!

### Lobsters

#### A Comparison of 5 Uniprocessor OS Scheduling Policies

### TheoryOverflow

#### Clarification needed on a proposed algorithm in a published paper called PAU-DL

There is an algorithm called Parallel Atom-Updating Dictionary Learning (PAU-DL) in the context of image and video processing and dictionary learning. I read about it in this published paper: Learning Overcomplete Dictionaries Based on Atom-by-Atom Updating. The relevant section says that updating the whole dictionary by updating atoms one by one has the problem that, when updating the atom d_k (the kth column of D), the updated versions of the previous columns are used while d_k, ..., d_K have not yet been updated. The authors then claim that the proposed algorithm updates atoms in parallel. I wonder how this is possible. I mean, K-SVD is about updating each atom of D and then updating the corresponding row in the coefficient matrix. What do we do in the parallel form of the algorithm? Please shed some light on the problem, so that a comparison between the two algorithms can be made using the information you give in your answer. I failed to find any online sources that explain the new algorithm. Thank you.

### StackOverflow

#### Receiving multipart messages with clrzmq

About multipart messages, the ZGuide states:

    "You will receive all parts of a message, or none at all."

    http://zguide.zeromq.org/php:chapter2#Multipart-Messages

However, in clrzmq the method for (nonblocking) receive of multipart messages is:

    /// The <paramref name="frameTimeout"/> will be used for each underlying Receive operation. If the timeout
    /// elapses before the last message is received, an incomplete message will be returned.
    ...
    public static ZmqMessage ReceiveMessage(this ZmqSocket socket, TimeSpan frameTimeout)

    https://github.com/zeromq/clrzmq/blob/master/src/ZeroMQ/SendReceiveExtensions.cs

This seems contradictory. Under what circumstances can ReceiveMessage return an incomplete message? What is the proper way to nonblockingly ask clrzmq for the next complete multipart message, so that I either get a complete multipart message or nothing at all (and a message that has only partly arrived remains available later)?
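The ZGuide guarantee is that libzmq delivers multipart messages atomically: once the first frame of a message is readable, all of its frames are, so a receiver should drain them in one loop guided by the "more frames follow" flag rather than applying a timeout per frame. That drain loop can be sketched as follows, with a plain queue of (payload, more) pairs standing in for a real socket (the tuples are my stand-in, not a clrzmq or libzmq API):

```python
from collections import deque

# Simulated socket: each frame is (payload, more_flag). On a real
# libzmq socket the flag would come from the RCVMORE socket option.
frames = deque([(b"header", True), (b"body", True), (b"tail", False),
                (b"next-msg", False)])

def recv_multipart(sock):
    # Drain exactly one whole multipart message: keep reading frames
    # while the previous frame's 'more' flag is set. Because libzmq
    # queues a multipart message atomically, once the first frame is
    # readable the remaining frames cannot block, so no per-frame
    # timeout is needed.
    message = []
    more = True
    while more:
        payload, more = sock.popleft()
        message.append(payload)
    return message

print(recv_multipart(frames))  # [b'header', b'body', b'tail']
print(recv_multipart(frames))  # [b'next-msg']
```

Under this reading, the pattern for a nonblocking receive is: poll (or try a nonblocking receive) for the first frame only, and if one arrives, drain the rest without any timeout.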
#### scala json validation not working I am following **Play For Scala ** for validation and parsing of Json I am receiving a request in controller after converting it to JsValue like this  val jsonRequest = request.body.asJson.get  i am trying to validate it like this jsonRequest.validate[ReadArtWorkCaseClass].fold( valid = { readArtWorkCaseClass => log.info("valid block") Ok("validation successful" ) }, invalid = { log.info("invalid block") errors => { log.info("error block") BadRequest(JsError.toFlatJson(errors)) } } )  i have implemented Read for this case class ReadArtWorkCaseClass(artworkid :String, artistid :String, institutionid :String , status :String, groupactivityid:String, details:String, pricehistoryid :String, sku :String, dimensions :String, artworkname :String, artworkseries :String , workclassifier :String , genreid :String, artworktype :String, createddate:String) object ReadArtWorkCaseClass { implicit val artworkReads: Reads[ReadArtWorkCaseClass] = ( (JsPath \ "artWorkid").read[String] and (JsPath \ "artistid").read[String] and (JsPath \ "institutionid").read[String] and (JsPath \ "activationStatus").read[String] and (JsPath \ "groupactivityid").read[String] and (JsPath \ "details").read[String] and (JsPath \ "pricehistoryid").read[String] and (JsPath \ "sku").read[String] and (JsPath \ "dimensions").read[String] and (JsPath \ "artworkname").read[String] and (JsPath \ "artworkseries").read[String] and (JsPath \ "workclassifier").read[String] and (JsPath \ "genreid").read[String] and (JsPath \ "artworktype").read[String] and (JsPath \ "createddate").read[String] )(ReadArtWorkCaseClass.apply _) }  when i tried to validate jsonrequest by inputing empty fields it does not go into the invalid block instead runs the valid block please guide me what is my mistake ### Planet Clojure #### Clojure/West 2015: Notes from Day One ### Life of a Clojure Expression • John Hume, duelinmarkers.com (DRW trading) • a quick tour of clojure internals • giving the 
talk in org mode (!) • disclaimers: no expert, internals can change, excerpts have been mangled for readability • most code will be java, not clojure • (defn m [v] {:foo “bar” :baz v}) • minor differences: calculated key, constant values, more than 8 key/value pairs • MapReader called from static array of IFns used to track macros; triggered by ‘{‘ character • PersistentArrayMap used for less than 8 objects in map • eval treats forms wrapped in (do..) as a special case • if form is non-def bit of code, eval will wrap it in a 0-arity function and invoke it • eval’s macroexpand will turn our form into (def m (fn [v] {:foo “bar :baz v})) • checks for duplicate keys twice: once on read, once on analyze, since forms for keys might have been evaluated into duplicates • java class emitted at the end with name of our fn tacked on, like: class a_mapm • intelli-j will report a lot of unused methods in the java compiler code, but what’s happening is the methods are getting invoked, but at load time via some asm method strings • no supported api for creating small maps with compile-time constant keys; array-map is slow and does a lot of work it doesn’t need to do ### Clojure Parallelism: Beyond Futures • Leon Barrett, the climate corporation • climate corp: model weather and plants, give advice to farmers • wrote Claypoole, a parallelism library • map/reduce to compute average: might use future to shove computation of the average divisor (inverse of # of items) off at the beginning, then do the map work, then deref the future at the end • future -> future-call: sends fn-wrapped body to an Agent/soloExecutor • concurrency vs parallelism: concurrency means things could be re-ordered arbitrarily, parallelism means multiple things happen at once • thread pool: recycle a set number of threads to avoid constantly incurring the overhead of creating a new thread • agent thread pool: used for agents and futures; program will not exit while threads are there; lifetime of 60 sec • future 
limitations • tasks too small for the overhead • exceptions get wrapped in ExecutionException, so your try/catches won’t work normally anymore • pmap: just a parallel map; lazy; runs N-cpu + 3 tasks in futures • generates threads as needed; could have problems if you’re creating multiple pmaps at once • slow task can stall it, since it waits for the first task in the sequence to complete for each trip through • also wraps exceptions just like future • laziness and parallelism: don’t mix • core.async • channels and coroutines • reads like go • fixed-size thread pool • handy when you’ve got a lot of callbacks in your code • mostly for concurrency, not parallelism • can use pipeline for some parallelism; it’s like a pmap across a channel • exceptions can kill coroutines • claypoole • pmap that uses a fixed-size thread pool • with-shutdown! will clean up thread pool when done • eager by default • output is an eagerly streaming sequence • also get pfor (parallel for) • lazy versions are available; can be better for chaining (fast pmap into slow pmap would have speed mismatch with eagerness) • exceptions are re-thrown properly • no chunking worries • can have priorities on your tasks • reducers • uses fork/join pool • good for cpu-bound tasks • gives you a parallel reduce • tesser • distributable on hadoop • designed to max out cpu • gives parallel reduce as well (fold) • tools for working with parallelism: • promises to block the state of the world and check things • yorkit (?) for jvm profiling ### Boot Can Build It • Alan Dipert and Micha Niskin, adzerk • why a new build tool? 
• build tooling hasn’t kept up with the complexity of deploys • especially for web applications • builds are processes, not specifications • most tools: maven, ant, oriented around configuration instead of programming • boot • many independent parts that do one thing well • composition left to the user • maven for dependency resolution • builds clojure and clojurescript • sample boot project has main method (they used java project for demo) • uses ‘–‘ for piping tasks together (instead of the real |) • filesets are generated and passed to a task, then output of task is gathered up and sent to the next task in the chain (like ring middleware) • boot has a repl • can do most boot tasks from the repl as well • can define new build tasks via deftask macro • (deftask build …) • (boot (watch) (build)) • make build script: (build.boot) • #!/usr/bin/env boot • write in the clojure code defining and using your boot tasks • if it’s in build.boot, boot will find it on command line for help and automatically write the main fn for you • FileSet: immutable snapshot of the current files; passed to task, new one created and returned by that task to be given to the next one; task must call commit! to commit changes to it (a la git) • dealing with dependency hell (conflicting dependencies) • pods • isolated runtimes, with own dependencies • some things can’t be passed between pods (such as the things clojure runtime creates for itself when it starts up) • example: define pod with env that uses clojure 1.5.1 as a dependency, can then run code inside that pod and it’ll only see clojure 1.5.1 ### One Binder to Rule Them All: Introduction to Trapperkeeper • Ruth Linehan and Nathaniel Smith; puppetlabs • back-end service engineers at puppetlabs • service framework for long-running applications • basis for all back-end services at puppetlabs • service framework: • code generalization • component reuse • state management • lifecycle • dependencies • why trapperkeeper? 
• influenced by clojure reloaded pattern • similar to component and jake • puppetlabs ships on-prem software • need something for users to configure, may not have any clojure experience • needs to be lightweight: don’t want to ship jboss everywhere • features • turn on and off services via config • multiple web apps on a single web server • unified logging and config • simple config • existing services that can be used • config service: for parsing config files • web server service: easily add ring handler • nrepl service: for debugging • rpc server service: nathaniel wrote • demo app: github -> trapperkeeper-demo • anatomy of service • protocol: specifies the api contract that that service will have • can have any number of implementations of the contract • can choose between implementations at runtime • defservice: like defining a protocol implementation, one big series of defs of fns: (init [this context] (let …))) • handle dependencies in defservice by vector after service name: [[:ConfigService get-in-config] [:MeowService meow]] • lifecycle of the service: what happens when initialized, started, stopped • don’t have to implement every part of the lifecycle • config for the service: pulled from file • supports .json, .edn, .conf, .ini, .properties, .yaml • can specify single file or an entire directory on startup • they prefer .conf (HOCON) • have to use the config service to get the config values • bootstrap.cfg: the config file that controls which services get picked up and loaded into app • order is irrelevant: will be decided based on parsing of the dependencies • context: way for service to store and access state locally not globally • testing • should write code as plain clojure • pass in context/config as plain maps • trapperkeeper provides helper utilities for starting and stopping services via code • with-app-with-config macro: offers symbol to bind the app to, plus define config as a map, code will be executed with that app binding and that config • 
there’s a lein template for trapperkeeper that stubs out working application with web server + test suite + repl • repl utils: • start, stop, inspect TK apps from the repl: (go); (stop) • don’t need to restart whole jvm to see changes: (reset) • can print out the context: (:MeowService (context)) • trapperkeeper-rpc • macro for generating RPC versions of existing trapperkeeper protocols • supports https • defremoteservice • with web server on one jvm and core logic on a different one, can scale them independently; can keep web server up even while swapping out or starting/stopping the core logic server • future: rpc over ssl websockets (using message-pack in transit for data transmission); metrics, function retrying; load balancing ### Domain-Specific Type Systems • Nathan Sorenson, sparkfund • you can type-check your dsls • libraries are often examples of dsls: not necessarily macros involved, but have opinionated way of working within a domain • many examples pulled from “How to Design Programs” • domain represented as data, interpreted as information • type structure: syntactic means of enforcing abstraction • abstraction is a map to help a user navigate a domain • audience is important: would give different map to pedestrian than to bus driver • can also think of abstraction as specification, as dictating what should be built or how many things should be built to be similar • showing inception to programmers is like showing jaws to a shark • fable: parent trap over complex analysis • moral: types are not data structures • static vs dynamic specs • static: types; things as they are at compile time; definitions and derivations • dynamic: things as they are at runtime; unit tests and integration tests; expressed as falsifiable conjectures • types not always about enforcing correctness, so much as describing abstractions • simon peyton jones: types are the UML of functional programming • valuable habit: think of the types involved when designing functions • 
spec-tacular: more structure for datomic schemas • from sparkfund • the type system they wanted for datomic • open source but not quite ready for public consumption just yet • datomic too flexible: attributes can be attached to any entity, relationships can happen between any two entities, no constraints • use specs to articulate the constraints • (defspec Lease [lesse :is-a Corp] [clauses :is-many String] [status :is-a Status]) • (defenum Status …) • wrote query language that’s aware of the defined types • uses bi-directional type checking: github.com/takeoutweight/bidirectional • can write sensical error messages: Lease has no field ‘lesee’ • can pull type info from their type checker and feed it into core.typed and let core.typed check use of that data in other code (enforce types) • does handle recursive types • no polymorphism • resources • practical foundations for programming languages: robert harper • types and programming languages: benjamin c pierce • study haskell or ocaml; they’ve had a lot of time to work through the problems of types and type theory • they’re using spec-tacular in production now, even using it to generate type docs that are useful for non-technical folks to refer to and discuss; but don’t feel the code is at the point where other teams could pull it in and use it easily ### ClojureScript Update • David Nolen • ambly: cljs compiled for iOS • uses bonjour and webdav to target ios devices • creator already has app in app store that was written entirely in clojurescript • can connect to device and use repl to write directly on it (!) ### Clojure Update • Alex Miller • clojure 1.7 is at 1.7.0-beta1 -> final release approaching • transducers coming • define a transducer as a set of operations on a sequence/stream • (def xf (comp (filter
odd?) (map inc) (take 5))) • then apply transducer to different streams • (into [] xf (range 1000)) • (transduce xf + 0 (range 1000)) • (sequence xf (range 1000)) • reader conditionals • portable code across clj platforms • new extension: .cljc • use to select out different expressions based on platform (clj vs cljs) • #?(:clj (java.util.Date.) :cljs (js/Date.)) • can fall through the conditionals and emit nothing (not nil, but literally don’t emit anything to be read by the reader) • performance has also been a big focus • reduced class lookups for faster compile times • iterator-seq is now chunked • multimethod default value dispatch is now cached ### StackOverflow #### Clojure threading first macro -> with Math/pow or any other multiple args function How to write in one line the following code: (-> 10 pow9)  where pow9 is: (def pow9 (partial #(Math/pow % 9)))  If I write (-> 10 (partial #(Math/pow % 9))) I will get back #<core$partial$fn__4228 clojure.core$partial$fn__4228@62330c23> instead, writing (-> 10 #(Math/pow % 9)) fails with CompilerException java.lang.ClassCastException: java.lang.Long cannot be cast to clojure.lang.ISeq, compiling:(NO_SOURCE_PATH:1:1), although (-> 10 pow9) works fine. The more general question is how to use -> with a function which accepts more than one argument, i.e. how to make this work: (-> 10 #(+ % 10))? #### How to call a method in a catch clause on an object defined in a try clause? I am creating a redis pubsub client in a try-catch block. In the try block, the client is initialised with a callback to forward messages to a client. If there's a problem sending the message to the client, an exception will be thrown, in which case I need to stop the redis client.
Here's the code: try { val redisClient = RedisPubSub( channels = Seq(currentUserId.toString), patterns = Seq(), onMessage = (pubSubMessage: PubSubMessage) => { responseObserver.onValue(pubSubMessage.data) } ) } catch { case e: RuntimeException => // redisClient isn't defined here... redisClient.unsubscribe(currentUserId.toString) redisClient.stop() messageStreamResult.complete(Try(true)) responseObserver.onCompleted() }  The problem is that the redis client val isn't defined in the catch block because there may have been an exception creating it. I also can't move the try-catch block into the callback because there's no way (that I can find) of referring to the redisClient object from within the callback (this doesn't resolve). To solve this I'm instantiating redisClient as a var outside the try-catch block. Then inside the try block I stop the client and assign a new redisPubSub (created as above) to the redisClient var. That's an ugly hack which is also error prone (e.g. if there genuinely is a problem creating the second client, the catch block will try to call methods on an erroneous object). Is there a better way of writing this code so that I can correctly call stop() on the redisClient if an exception is raised when trying to send the message to the responseObserver? Update I've just solved this using promises. Is there a simpler way though? #### Clojure Architecture like Uncle Bob did I am trying to implement Clojure architecture as Uncle Bob did here http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html and as he describes in Clean Code, Episode 07 - Architecture, Use Cases and High Level Design. Nothing in an inner circle can know anything at all about something in an outer circle. I want to code the core of the app with all business rules and tests. This core has to have definitions of operations on "objects" in the database like user, payment, advertisement etc.
But the implementation of how this should be done has to be at a higher level of the application. So the question is: can you give me an example of a well-architected application on GitHub, like the image with the circles? I am learning Clojure and I want to see how technically it can be done. I am trying to do it myself but I have bad results. A simple example of code will help me a lot. I want to know how to create layers in Clojure, like in the image, step by step. I will be glad for any information on how to do that with high quality in Clojure. It can be code, video or an article; free or paid. ### QuantOverflow #### Creating correlated Brownian motions from independent ones This is an exercise from the book "Stochastic Calculus for Finance II", page 199. Let \((W_{1}(t),\dots,W_{d}(t))\) be a d-dimensional Brownian motion and let \((\sigma_{ij}(t))_{m\times d}\) be a matrix-valued process adapted to the filtration associated with the Brownian motion. Define \(\sigma_{i}(t) = [\sum_{j=1}^{d}\sigma_{ij}^{2}(t)]^{1/2}\) and assume this is never zero. Define also \(B_{i}(t) = \sum_{j=1}^{d} \int_{0}^{t} \frac{\sigma_{ij}(u)}{\sigma_{i}(u)}\,dW_{j}(u)\). Show that, for each \(i\), \(B_{i}\) is a Brownian motion. #### Validation of Bates SVJ model I have just finished implementing the Bates model for pricing European call options. To check results, I have been looking for a validation set where I could see the Bates parameter values and their associated call prices. However, I have not been able to obtain this information online. Could anyone point me to any paper or reference where parameter values and call prices for the Bates model are given? ### StackOverflow #### Compojure handler friend/authenticate eats body of POST request How can I safely get the content of the :body InputStream from compojure? See related but different question for background.
I'm trying to authenticate my ring routes with Friend using compojure handler/site but when I try to read the :body from an http POST request (which is a Java InputStream), it is closed: 23:01:20.505 ERROR [io.undertow.request] (XNIO-1 task-3) Undertow request failed HttpServerExchange{ POST /paypal/ipn} java.io.IOException: UT000034: Stream is closed at io.undertow.io.UndertowInputStream.read(UndertowInputStream.java:84) ~[undertow-core-1.1.0.Final.jar:1.1.0.Final] at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) ~[na:1.8.0_45] at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) ~[na:1.8.0_45] at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) ~[na:1.8.0_45] at java.io.InputStreamReader.read(InputStreamReader.java:184) ~[na:1.8.0_45] at java.io.BufferedReader.fill(BufferedReader.java:161) ~[na:1.8.0_45] at java.io.BufferedReader.read(BufferedReader.java:182) ~[na:1.8.0_45] at clojure.core$slurp.doInvoke(core.clj:6650) ~[clojure-1.7.0-beta1.jar:na] at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojure-1.7.0-beta1.jar:na]  If I remove the handler, the problem goes away. I've found one possible solution called groundhog that captures and stores all requests. The library I'm using, clojure-paypal-ipn, originally called reset on the stream, but that is not supported by Undertow (or indeed several other Java/Clojure servers), so I worked around it. Here are some snippets of my code: (defroutes routes ...
(POST "/paypal/ipn" [] (payment/paypal-ipn-handler payment/paypal-data payment/paypal-error paypal-sandbox?)) (route/resources "/")) (defn authenticate-routes "Add Friend handler to routes" [routes-set] (handler/site (friend/authenticate routes-set friend-settings))) ;; middleware below from immutant.web.middleware (defn -main [& {:as args}] (web/run (-> routes (web-middleware/wrap-session {:timeout 20}) (authenticate-routes) ; use friend handler ;; wrap the handler with websocket support ;; websocket requests will go to the callbacks, ring requests to the handler (web-middleware/wrap-websocket websocket-callbacks)) args))  And here are the guts of payment.clj (paypal-data and paypal-error just pprint input right now): (defn req->body-str "Get request body from ring POST http request" [req] (let [input-stream (:body req)] (let [raw-body-str (slurp input-stream)] raw-body-str))) (defn paypal-ipn-handler ([on-success on-failure] (paypal-ipn-handler on-success on-failure true)) ([on-success on-failure sandbox?] (fn [req] (let [body-str (req->body-str req) ipn-data (paypal/parse-paypal-ipn-string body-str)] (do (.start (Thread. (fn [] (paypal/handle-ipn ipn-data on-success on-failure sandbox?)))) ; respond to PayPal right away, then go and process the ipn-data {:status 200 :headers {"Content-Type" "text/html"} :body ""})))))  #### How can I find the index of the maximum values along the rows of a matrix in Spark Scala? I have a question about finding the index of the maximum value along each row of a matrix. How can I do this in Spark Scala? This function would be like argmax in numpy in Python. ### CompsciOverflow #### Example of worst case input for Build-Max-Heap Is there a worst-case input for Build-Max-Heap? I know there is, but I just couldn't paint a clear picture of it in my head. ### StackOverflow #### Are there any pitfalls to using Any vs specific argument type? Suppose I want a method to output something, a string or integer in this case.
I can do it like this: def outString(str: String) { str // or "return str" }  and run it like this: outString("foo") But I can also avoid declaring a specific type for the argument and it'll work: def outString(str: Any) { str }  and run it either like this: outString("foo") or outString(123). Given that they both work, and assuming a situation where you don't always know the type of the argument passed, are there any pitfalls in using Any over specific argument types? Does Any do any kind of automatic type checking, like interpreted languages do, that would slow the code down? #### spray akka deployment on webserver I have an application built on spray + akka, using this guide: http://sysgears.com/articles/building-rest-service-with-scala/ It explains this example: https://github.com/oermolaev/simple-scala-rest-example The application is working just fine. But when trying to deploy on a web server I didn't find a way to do that. I've tried to use xsbt-web-plugin to deploy on Tomcat; running ~container:start gives the following output:  [info] starting server ... Adding Context for target/webapp ... Starting service Tomcat Starting Servlet Engine: Apache Tomcat/7.0.34 org.apache.catalina.startup.ContextConfig getDefaultWebXmlFragment INFO: No global web.xml found org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-nio-8080"] But Tomcat is returning 404 for all the requests. Does someone know how I can deploy a spray akka application on Tomcat? ### infra-talk #### Practical fault detection on timeseries part 2: first macros and templates In the previous fault detection article, we saw how we can cover a lot of ground in fault detection with simple methods and technology that is available today. It had an example of a simple but effective approach to find sudden spikes (peaks and drops) within fluctuating time series. This post explains the continuation of that work and provides you the means to implement this yourself with minimal effort.
I'm sharing with you: • Bosun macros which detect our most common not-trivially-detectable symptoms of problems • Bosun notification template which provides a decent amount of information • Grafana and Graph-Explorer dashboards and integration for further troubleshooting We reuse this stuff for a variety of cases where the data behaves similarly and I suspect that you will be able to apply this to a bunch of your monitoring targets as well. read more ### StackOverflow #### Scala - Does pattern matching break the Open-Closed principle? [duplicate] First of all, I know this question has been asked previously here, but it wasn't clear to me. Pattern matching is used to make a function react to different types of data. One would say that if my pattern matching has 4 cases and one month later I need to add a 5th one, I'll be breaking the Open-Closed Principle. I agree with that. In a worst case scenario: Let's suppose I'm using a closed library (I can't touch code inside it) and I need to extend its functionality. The functionality I want to extend is indeed a pattern matching function. What should I do? I think pattern matching is OK if I'm totally sure my object doesn't change very often and will never need to be extended by others. What's your opinion about using this technique? This is more like a debate than a question.
Thanks, #### I want to store each RDD into a database in Twitter streaming using Apache Spark, but I get a "task not serializable" error in Scala I wrote code in which Twitter streaming takes an RDD of a Tweet class and stores each RDD in a database, but it fails with a "task not serializable" error. I've pasted the code here; please help me. sparkstreaming.scala case class Tweet(id: Long, source: String, content: String, retweet: Boolean, authName: String, username: String, url: String, authId: Long, language: String) trait SparkStreaming extends Connector { def startStream(appName: String, master: String): StreamingContext = { val db = connector("localhost", "rmongo", "rmongo", "pass") val dbcrud = new DBCrud(db, "table1") val sparkConf: SparkConf = new SparkConf().setAppName(appName).setMaster(master).set(" spark.driver.allowMultipleContexts", "true").set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") // .set("spark.kryo.registrator", "HelloKryoRegistrator") // sparkConf.registerKryoClasses(Array(classOf[DBCrud])) val sc: SparkContext = new SparkContext(sparkConf) val ssc: StreamingContext = new StreamingContext(sc, Seconds(10)) ssc } } object SparkStreaming extends SparkStreaming  I use this streaming context in a Play controller to store tweets in the database, but it throws an exception. I'm using MongoDB to store them. def streamstart = Action { val stream = SparkStreaming val a = stream.startStream("ss", "local[2]") val db = connector("localhost", "rmongo", "rmongo", "pass") val dbcrud = DBCrud val twitterauth = new TwitterClient().tweetCredantials() val tweetDstream = TwitterUtils.createStream(a, Option(twitterauth.getAuthorization)) val tweets = tweetDstream.filter { x => x.getUser.getLang == "en" }.map { x => Tweet(x.getId, x.getSource, x.getText, x.isRetweet(), x.getUser.getName, x.getUser.getScreenName, x.getUser.getURL, x.getUser.getId, x.getUser.getLang) } // tweets.foreachRDD { x => x.foreach { x => dbcrud.insert(x) } } tweets.saveAsTextFiles("/home/knoldus/sentiment project/spark
services/tweets/tweets") // val s=new BirdTweet() // s.hastag(a.sparkContext) a.start() Ok("start streaming") }  When I make a singleton of the streaming object that takes tweets and uses foreachRDD to store each tweet, it works, but when I use it from outside it doesn't. Please help me. ### CompsciOverflow #### Is an LBA with a stack more powerful than an LBA without? Even though a linear bounded automaton (LBA) is strictly more powerful than a pushdown automaton (PDA), adding a stack to an LBA might make it more powerful. An LBA with a stack should not be Turing complete, because its halting problem should be decidable. (It seems to be equivalent to the halting problem for a PDA.) Can a deterministic LBA with a stack decide all problems decided by a nondeterministic LBA? Or, on the other hand, maybe an LBA with a stack is (provably) not more powerful than an LBA? Edit I think I found out how a deterministic LBA with a stack can simulate a nondeterministic LBA. The stack can be used to store and recall the current state of the external memory (the linear bounded memory) as often as needed. The internal state is finite, so there is a global bound on the maximal number of nondeterministic moves available in a single step. So backtracking can be used to recursively simulate the result of each nondeterministic move. I will have to think about how to make these two observations (the halting problem is decidable, a nondeterministic LBA can be simulated deterministically) more rigorous, before self-answering. I'm away next week, so don't hold your breath. ### Planet Theory #### Whales of the Web The average web site has links connecting it with 29 other sites. I came up with this number in the following way. A data set I’ve been playing with for a few weeks lists 43 million web sites and 623 million links between sites; the quotient of those numbers is about 14.5. Since each link has two ends, the per-site total of inbound plus outbound links is double the quotient.
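The arithmetic in that opening paragraph can be made explicit; a quick sketch in Python using the counts quoted above:

```python
sites = 43_000_000    # pay-level domains in the data set
links = 623_000_000   # links between sites

quotient = links / sites   # about 14.5 links per site
# each link has two ends, so it counts toward both its
# source site and its target site
avg_degree = 2 * quotient

print(round(quotient, 1), round(avg_degree))  # 14.5 29
```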
Twenty-nine links is the average, but by no means is it a typical number of links. Almost half of the sites have four or fewer links. At the other end of the spectrum, the most-connected web site (blogspot.com) has almost five million links, and there are six more sites with at least a million each. The distribution of link numbers—the degree sequence—looks like this (both scales are logarithmic, base 2): I want to emphasize that these are figures for web sites, not web pages. The unit of aggregation is the “pay-level domain”—the domain name you have to pay to register. Examples are google.com or bbc.co.uk. Subdomains, such as maps.google.com, are all consolidated under the main google.com entry. Any number of links from pages on site A to pages on site B are recorded as a single link from A to B. The source of these numbers is the Web Data Commons, a project overseen by a group at the University of Mannheim. They extracted the lists of domains and the links between them from a 2012 data set compiled and published by the Common Crawl Foundation (which happens to be the subject of my latest American Scientist column). The Common Crawl does essentially the same thing as the big search engines—download the whole Web, or some substantial fraction of it—but the Common Crawl makes the raw data publicly available. There are interesting questions about both ends of the degree sequence plotted above. At the far left, why are there so many millions of lonely, disconnected web sites, with just one or two links, or none at all? I don’t yet feel I know enough to tell the story of those orphans of the World Wide Web. I’ve been focused instead on the far right of the graph, on the whales of the Web, the handful of sites with links to or from many thousands of other sites. From the set of 43 million sites, I extracted all those with at least 100,000 inbound or outbound links; in other words, the criterion for inclusion in my sample was \(\max(\mathit{indegree}, \mathit{outdegree}) \ge 100{,}000\).
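The selection step ("at least 100,000 inbound or outbound links") is easy to sketch. The site names and counts below are hypothetical stand-ins for the real Web Data Commons tallies:

```python
# hypothetical (site -> (indegree, outdegree)) tallies;
# the real numbers come from the Web Data Commons pay-level-domain graph
degrees = {
    "site-a.example": (4_900_000, 150_000),  # heavily linked both ways
    "site-b.example": (1_200_000, 3_000),    # mostly inbound
    "site-c.example": (2, 4),                # a typical small site
}

# keep a site if either its inbound or its outbound count reaches 100,000
whales = [site for site, (ind, outd) in degrees.items()
          if max(ind, outd) >= 100_000]

print(whales)  # ['site-a.example', 'site-b.example']
```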
It turns out that just 112 sites qualify. In the diagram below, they are grouped according to their top-level domain (com, org, de, and so on). The size of the colored dot associated with each site encodes the total number of links; the color indicates the percent of those links that are incoming. Hover over a site name to see the inbound, outbound and bidirectional links between that site and the other members of this elite 112. (The diagram was built with Mike Bostock’s d3.js framework, drawing heavily on this example.) Patience, please . . . The bright red dots signify a preponderance of outgoing links, with relatively few incoming ones. Many of these sites are directories or catalogs, with lists of links classified by subject matter. Such “portal sites” were popular in the early years of the Web, starting with the World Wide Web Home at CERN, circa 1994; another early example was Jerry and David’s Guide to the World Wide Web, which evolved into Yahoo. Search engines have swept aside many of those hand-curated catalogs, but there are still almost two dozen of them in this data set. Curiously, the Netherlands and Germany (nl and de) seem to be especially partial to hierarchical directories. Bright blue dots are rarer than red ones; it’s easier to build a site with 100,000 outbound links than it is to persuade 100,000 other sites to link to yours. The biggest blue dot is for wordpress.org, and I know the secret of that site’s popularity. If you have a self-hosted WordPress blog (like this one), the software comes with a built-in link back to home base. Another conspicuous blue dot is gmpg.org, which mystified me when I first noticed that it ranks fourth among all sites in number of incoming links. Having poked around at the site, I can now explain. GMPG is the Global Multimedia Protocols Group, a name borrowed from the Neal Stephenson novel Snow Crash. 
In 2003, three friends created a real-world version of GMPG as a vehicle for the XHTML Friends Network, which was conceived as a nonproprietary social network. One of the founders was Matt Mullenweg, who was also the principal developer of WordPress. Hence every copy of WordPress includes a link to gmpg.org. (The link is in the <head> section of the HTML file, so you won’t see it on the screen.) At this point GMPG looks to be a moribund organization, but nonetheless more than a million web sites have links to it. Networkadvertising.org is the web site of a trade group for online advertisers. Presumably, its 143,863 inbound links are embedded in ads, probably in connection with the association’s opt-out program for behavioral tracking. (To opt out, you have to accept a third-party cookie, which most people concerned about privacy would refuse to do.) Still another blue-dot site, miibeian.gov.cn, gets its inward links in another way. If I understand correctly, all web sites hosted in China are required to register at miibeian.gov.cn, and they must place a link back to that site on the front page. (If this account is correct, the number of inbound links to miibeian.gov.cn tells us the number of authorized web sites in China. The number in the 2012 data is 289,605, which seems low.) One final observation I find mildly surprising: Measured by connectivity, these 112 sites are the largest on the entire Web, and you might think they would be reasonably stable over time. But in the three years since the data were collected, 10 percent of the sites have disappeared altogether: Attempts to reach them either time out or return a connection error. At least a few more sites have radically changed their character. For example, serebella.com was a directory site that had almost 700,000 outbound links in 2012; it is now a domain name for sale. Among web sites, it seems, nobody is too big to fail. The table below lays out the numbers for the 112 sites. 
It’s sortable: Click on any of the column headers to sort on that field; click again to reverse the ordering. If you’d like to play with the data yourself, download the JSON file. (The columns are: site, inlinks, outlinks, total links, and % inbound.) ### Lobsters #### EU study recommends OpenBSD ### StackOverflow #### How to convert a Scala Array[Byte] to Java byte[]? I have an Akka application with actors written in Scala and others in Java. In one case a Scala Actor writes an Array[Byte] and I need to deserialize this from a Java Actor. In this use-case I ultimately need a String representation in Java of the Array[Byte], so that would also solve my problem. Scala Actor: val outputStream = new java.io.ByteArrayOutputStream() val bufferedOutputStream = new java.io.BufferedOutputStream(outputStream, 1024) val exitCode : Integer = processBuilder #> bufferedOutputStream ! bufferedOutputStream.flush val content = outputStream.toByteArray // this gives an Array[Byte] javaActorRef.tell(content, getSelf())  Java Actor: /** * {@inheritDoc} */ @Override public void onReceive(Object object) throws Exception { // object has a Scala Array[Byte] how do I convert here to // byte[] or to String?  #### On the 4Clojure website, how do you see your previous problems, so you can return to one of them? I'm doing problems at 4Clojure. Somehow, when I finished a problem (around #22), a link appeared that jumped me to problem #35. I want to do the problems that were skipped. But when I'm logged in to 4Clojure, the website only shows me problems after the last one I completed, which is #34. Is there a way to see all the problems (completed and uncompleted) at 4Clojure, so I can do the skipped ones? ### TheoryOverflow #### Entropy of sum of two dependent variables How do I bound the entropy of the sum of two variables which are: 1. fully dependent (circularly rotated copies of the same series): \(H(X(N)+X(N-\tau))\) 2. partially dependent random variables \(X\) and \(Y\)?
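For orientation (a generic observation, not specific to the question's setup): for discrete variables, two standard bounds on the entropy of a sum follow from basic identities, since \(X+Y\) is a function of the pair \((X,Y)\), and given \(X\) the sum determines \(Y\):

```latex
H(X+Y) \le H(X,Y) = H(X) + H(Y \mid X) \le H(X) + H(Y)
\qquad
H(X+Y) \ge H(X+Y \mid X) = H(Y \mid X)
```

In the fully dependent case, both \(X(N)\) and \(X(N-\tau)\) are functions of the same underlying series, so the entropy of their sum is also bounded above by the entropy of the series itself.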
### /r/netsec #### The Chinese Hackers Who Are Actually Not Trying to Hack You ### QuantOverflow #### How to differentiate a Brownian motion? By definition a Wiener process cannot be differentiated. But when we use Ito's lemma on F = X^2, where X is a Wiener process, we get the total change dF = 2XdX + dt  How can we calculate dF/dX when by definition it cannot be differentiated? Isn't this a contradiction by definition? ### Planet Emacsen #### Irreal: Some Dired Tips Marcin Borkowski (mbork) has a nice post on Emacs dired tips. Dired is amazingly powerful but most of us don't begin to take advantage of its capabilities. Mbork looks at a few things you can do to enhance your workflow with dired. Many of them I was (and you probably are) familiar with but I did learn about how to omit “uninteresting” files. He covers several areas so you should take a look. If you learn even one thing you didn't know, it will be time well spent. ### StackOverflow #### pattern for return values In validate methods, what kind of return types or return value patterns are preferred? If a validate function returns success/failure and failed fields as a result, should the function return an array {true/false, array of error fields}, or true on success and an array of error fields in case of failure? #### Spray marshalling collection of ToResponseMarshallable I have a simple Spray scenario that doesn't work as I expected. I have a spray server that dispatches work to different modules and then returns their composed responses. As I don't want to limit Module responses, I chose to have the Module's method return ToResponseMarshallable, as I have Modules that need to return a plain String.
Module signature could be: def moduleProcessing(): ToResponseMarshallable = randomString() And my "complete" block looks similar to this: complete { val response1 = moduleProcessing() val response2 = moduleProcessing() Seq(response1,response2) }  In the previous example, I would expect to get: [{"someRandomString"},{"anotherRandomString"}] But I am getting: [{},{}] Of course it will propagate as expected if I return a single response or if I change the signature of the moduleProcessing return type to any Marshallable Type. Thanks in advance for your help! ### CompsciOverflow #### How do I start extracting standards from data? I have a database which has machine data ( tables like: machine cycles, machine phases and many more ) and I am supposed to analyze this data. Meaning, with the data present in different tables, I have to filter out or rather define standard machine cycles, standard machine phases etc. Hence, can you please help me with the starting point, like, how should I start my analysis in defining standard cycles, standard phases of machines with the data available in the database. #### Multiclass classification with growing number of classes I have a dataset of music listening history: when it was listened, where it was listened, what the weather was outside (and many more features are coming soon) and a track_id as a label. I want to run a multiclass classification on this data but I have these problems: • Constantly mapping my track_ids to classes [0..distinct_trackid_count) and back • I have a huge number of classes (tens of thousands) • The number of classes is constantly growing, so I always have to retrain my algorithm from the start I have a feeling that multiclass classification is not what I need here, and I need help in figuring out how to approach this problem. ### UnixOverflow #### How to measure PCI-Express bus usage? I'm looking for a way to find out if the PCIe bus is the bottleneck or not.
It's not a problem to measure how many bytes were transferred through any particular NIC: Is there a way to find out how much data was transferred to all the other PCIe devices (hard drives, video cards, etc.)? ### /r/compsci #### PowerPoint slides for this book? I am looking for PowerPoint slides for Computer Graphics: Principles and Practice, 3rd edition. Are these available for the entire book? Like these ones: http://cs.brown.edu/courses/cs123/lectures/CS123.L23_Acceleration_Data_Structures_11.6.14.pdf submitted by Ganda2 ### DataTau #### Five Takeaways on the State of Natural Language Processing ### StackOverflow #### Spray, ScalaTest and HTTP services: could not find implicit value for parameter ta: I am, again, struggling with spray and cannot set up a test correctly. I looked at the similar question spray-testkit: could not find implicit value for parameter ta: and the official spray template and cannot figure out what I am missing and/or doing wrong. I have a very simple service: trait SimpleService extends HttpService { val fooRoute = path("foo") { get { complete("bar") } } }  And I have a very simple test: class SimpleServiceTest extends FlatSpec with Matchers with SimpleService with ScalatestRouteTest { override implicit def actorRefFactory: ActorRefFactory = system "The service" should "return OK status when getting /foo" in { Get("/foo") ~> fooRoute ~> check { status should be(OK) } } }  When I try to compile this, I get the following error: Error:(17, 17) could not find implicit value for parameter ta: SimpleServiceTest.this.TildeArrow[spray.routing.RequestContext,Unit] Get("/foo") ~> fooRoute ~> check { ^  Can anyone help me and tell me what I am missing? I do not find anything unusual, and I am close to evaluating Play instead of spray. #### spray-can webservice graceful shutdown I have a spray.io based webservice; it runs as a standalone jar (I use sbt assembly and then just java -jar myws.jar).
It has pretty much the same bootstrap as in the spray examples, like this: /** Bootstrap */ object Boot extends App { // we need an ActorSystem to host our application in implicit val system = ActorSystem("my-system") // create and start our service actor val service = system.actorOf(Props[MyServiceActor], "my-ws-service") implicit val timeout = Timeout(10.seconds) CLIOptionsParser.parse(args, CLIOptionsConfig()) map { config => // start a new HTTP server IO(Http) ? Http.Bind(service, interface = config.interface, port = config.port) } } Now I just run the process in the background with java -jar my-service "$@" & and stop it with kill -9 pid.

I'd like to stop my webservice gracefully, meaning that it finishes open connections and refuses new ones.

The spray-can page on GitHub recommends sending it an Akka PoisonPill message. Ideally I'd like to initiate this from the command line, as simply as possible. I thought maybe to attach one more HTTP server instance, bound to localhost only, with some REST methods to stop (and maybe diagnose) the webservice. Is that feasible? What are the other options?

UPDATE: I added what I imagine ought to work, based on answers, but it seems not to; at least I've never seen any of the messages I expected in stdout or the log. I've tried variations of HttpUnbind and PoisonPill, together and one at a time. Could anyone with a sharp akka eye look at this? PS. The hook itself is called successfully, I checked. The signal I send to the JVM is SIGTERM.

/* Simple reaper actor */
class Reaper(refs: ActorRef*) extends Actor {
  private val log = Logging(context.system, this)
  val watched = ArrayBuffer(refs: _*)

  refs foreach context.watch

  def receive = {
    case Terminated(ref) =>
      watched -= ref
      log.info(s"Terminated($ref)")
      println(s"Terminated($ref)")
      if (watched.isEmpty) {
        log.info("Shutting down the system")
        println("Shutting down the system")
        system.shutdown()
      }
  }
}

// termination hook to gracefully shutdown the service
override def run() = {
val reaper = system.actorOf(Props(new Reaper(IO(Http), service)))
//IO(Http) ? Http.Unbind(5.minutes)
IO(Http) ! PoisonPill
}
})


UPDATE2: So, somehow it works; namely, when the PoisonPill is sent, all current HTTP connections get closed. But I'd rather stop accepting new connections and wait for the open ones to return their responses before closing.

VERDICT: It seems that akka has a hook of its own, because although my hook gets executed, the actors get killed and all connections get closed without any action on my part. If someone can offer a solution with a JVM shutdown hook, that would be great. I think this is an important problem, and sadly there is no good recipe for it online. In the meanwhile I will try to implement graceful shutdown at the TCP/HTTP level.
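An editor's sketch of the JVM-shutdown-hook approach asked for above (the name `BindManager` is invented for illustration; spray-can 1.3's Bind/Unbind message protocol is assumed). The key detail is that the Http.Unbind command must go to the listener actor that answered with Http.Bound (the sender of that message), not to the IO(Http) manager:

```scala
import akka.actor._
import akka.io.IO
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._
import spray.can.Http

// Captures the HTTP listener (the sender of Http.Bound) so a JVM shutdown
// hook can ask it to Unbind before the actor system is stopped.
class BindManager(service: ActorRef, iface: String, port: Int) extends Actor {
  import context.system
  IO(Http) ! Http.Bind(service, interface = iface, port = port)

  def receive = {
    case Http.Bound(_) =>
      val listener = sender() // the actor accepting new connections
      sys.addShutdownHook {
        implicit val timeout = Timeout(15.seconds)
        // Refuse new connections, give open ones a grace period to finish...
        Await.ready(listener ? Http.Unbind(10.seconds), 15.seconds)
        // ...then stop the actor system.
        system.shutdown()
        system.awaitTermination(30.seconds)
      }
  }
}
```

With this in place, a plain `kill pid` (SIGTERM, not `kill -9`, which skips shutdown hooks) should unbind first and only then tear down the system.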

#### joda DateTime format causes null pointer error in Spark RDD functions

The exception message is as follows:

at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.8.jar:2.3.8] at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.8.jar:2.3.8] at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8] at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8] at scala.Option.map(Option.scala:146) [scala-library-2.11.6.jar:na] Caused by: java.lang.IllegalArgumentException: bound must be positive at java.util.Random.nextInt(Random.java:388) ~[na:1.8.0_40] at scala.util.Random.nextInt(Random.scala:66) ~[scala-library-2.11.6.jar:na]

### TheoryOverflow

#### Pinhole Camera Problem! [on hold]

I was reading the slides of a university course, and one of the slides says: 1. Eyes/cameras can't have VERY small holes because that limits the amount of entering light 2. and diffracts/bends the light. Bending of light (diffraction) only takes place when light passes near the edge of a surface (e.g. water droplets in a cloud), but in the case of a camera or eye, light passes through the lens (not near its edge), so refraction takes place instead of diffraction. So my question is: how can they say that if eyes/cameras have very small holes they will diffract/bend the light?

#### Is there a typed lambda calculus which is consistent and Turing complete?

Is there a typed lambda calculus where the corresponding logic under the Curry-Howard correspondence is consistent, and where there are typeable lambda expressions for every computable function? This is admittedly an imprecise question, lacking a precise definition of "typed lambda calculus." I'm basically wondering whether there are either (a) known examples of this, or (b) known impossibility proofs for something in this area.
### /r/clojure

#### Help figuring out how to solve my problem with a macro

I'm writing a web application using Ring/Compojure and one of my routes looks like this: (POST "/api/post" request (if-not (authenticated? request) {:status 403} (handle-create-post request))) I realized I want the authentication logic to be reused on the several routes that the user needs to be authorized to access, and thought a macro like this: (authroute (POST "/api/post" request (handle-create-post request))) would be nice. This doesn't work, but I'm trying to scrape together something like this: (defmacro authroute "Takes a Compojure route definition and returns a form that also checks if the user is authenticated and returns a 403 error if they are not." [expression] '(~@(butlast expression) (if-not (authenticated? ~(nth expression 2)) {:status 403} ~(last expression)))) What am I doing wrong here? Is this a good use of macros? submitted by Tortoise_Face

### StackOverflow

#### Apache-Spark Internal Job Scheduling

I came across the feature in Spark that allows you to schedule different tasks within a Spark context. I want to use this in a program where I map my input RDD (from a text source) into a key-value RDD [K,V], subsequently make a composite key-value RDD [(K1,K2),V] and a filtered RDD containing some specific values. The further pipeline involves calling some statistical methods from MLlib on both RDDs and a join operation, followed by externalizing the result to disk. I am trying to understand how Spark's internal fair scheduler will handle these operations. I tried reading the job scheduling documentation but got more confused by the concepts of pools, users and tasks. What exactly are the pools: are they certain 'tasks' which can be grouped together, or are they Linux users pooled into a group? What are users in this context: do they refer to threads, or is it something like SQL context queries?
I guess it relates to how tasks are scheduled within a Spark context, but reading the documentation makes it seem like we are dealing with multiple applications with different clients and user groups. Can someone please clarify this?

### CompsciOverflow

#### "Practical forms" of Chernoff bound for inequality in expectation

From Wikipedia: The above formula is often unwieldy in practice, so the following looser but more convenient bounds are often used: (i) $Pr(X\geq (1+\delta)\mu)\leq e^{-\frac{\delta^2\mu}{3}}, 0<\delta<1$ (ii) $Pr(X\leq (1-\delta)\mu)\leq e^{-\frac{\delta^2\mu}{2}}, 0<\delta<1$ The assumption they use is $E[X]=\mu$. Would (i) still hold if we only assume $E[X]\leq \mu$? Would (ii) still hold if we only assume $E[X]\geq\mu$? If not, what "practical forms" do we have in these cases?

### StackOverflow

#### How to import .txt file in Scala

Hi, I'm new to programming. How do I import a .txt file? My code can't find the file; is there a specific directory it has to be put into? My code: object Zettel01 extends App { import scala.io.Source object Suchtest { val gesch = Source.fromFile("DieUnendlicheGeschichte.txt").getLines() for (w <- gesch) println(w) } } I have tried different code but the problem is always the same: it can't find the .txt file... Thanks in advance for any help. Flurry1337

#### How can I capture the standard output of clojure?

I have some printlns I need to capture from a Clojure program and I was wondering how I could capture the output. I have tried: (binding [a *out*] (println "h") a ) but this doesn't work.

### CompsciOverflow

#### sliding window algorithm implementation

I am having trouble figuring out how to derive the numbers in the solution to this question. I am following the steps, but my numbers do not come anywhere near those of the solution. Can someone give a concise step-by-step way of figuring out both problems? Solution

### /r/emacs

#### Work in progress light minimal SVG modeline...
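A note on the Scala file-import question above: Source.fromFile resolves relative paths against the JVM's working directory (usually the directory sbt or the IDE was launched from). A minimal sketch, with an invented helper `read`, that makes the lookup location visible instead of just throwing FileNotFoundException:

```scala
import java.io.File
import scala.io.Source

object Suchtest {
  // Returns None when the file cannot be found at the given (relative) path;
  // relative paths resolve against the JVM's working directory.
  def read(path: String): Option[List[String]] = {
    val file = new File(path)
    if (file.exists()) Some(Source.fromFile(file).getLines().toList)
    else None
  }

  def main(args: Array[String]): Unit = {
    // Show where a relative name is actually being looked up:
    println("Working directory: " + new File(".").getCanonicalPath)
    read("DieUnendlicheGeschichte.txt") match {
      case Some(lines) => lines.foreach(println)
      case None        => println("File not found relative to the working directory")
    }
  }
}
```

Placing the .txt file in the printed working directory (for sbt projects, typically the project root) is usually the fix.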
### Lobsters

#### Haskell at Front Row

#### The Lava Layer Anti-Pattern

### StackOverflow

#### Bizarre Swift Compiler Error: "Expression too complex" on a string concatenation

I find this amusing more than anything. I've fixed it, but I'm wondering about the cause. Here is the error: DataManager.swift:51:90: Expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions. Why is it complaining? It seems like one of the simplest expressions possible. The compiler points to the columns + ");" section. func createTableStatement(schema: [String]) -> String { var schema = schema; schema.append("id string"); schema.append("created integer"); schema.append("updated integer"); schema.append("model blob"); var columns: String = ",".join(schema); var statement = "create table if not exists " + self.tableName() + "(" + columns + ");"; return(statement); } The fix is: var statement = "create table if not exists " + self.tableName(); statement += "(" + columns + ");"; This also works (via @efischency), but I don't like it as much because I think the parentheses get lost: var statement = "create table if not exists \(self.tableName()) (\(columns))"

### Planet Theory

#### TR15-073 | Lower Bounds for Sums of Products of Low arity Polynomials | Neeraj Kayal, Chandan Saha

We prove an exponential lower bound for expressing a polynomial as a sum of products of low-arity polynomials. Specifically, we show that for the iterated matrix multiplication polynomial, IMM_{d, n} (corresponding to the product of d matrices of size n \times n each), in any expression of the form IMM_{d, n} = \sum_{i=1}^{s} \quad \prod_{j=1}^{m} \quad Q_{ij}, where the Q_{ij}'s are of arity at most t \leq \sqrt{d} (i.e. each Q_{ij} depends on at most t variables), the number of summands s must be at least d^{\Omega\left( \frac{d}{t} \right)}.
A special case of this problem where the Q_{ij}'s are further restricted to have degree at most one was posed as an open problem by Shpilka and Wigderson [SW99] and recently resolved in [KS15]. We show that a refinement of the argument in [KS15] yields the above-mentioned lower bound on s, regardless of the degrees of the Q_{ij}'s (and also regardless of m, the number of factors in each summand). Lower bounds for the same model were also obtained in an almost simultaneous but independent work by Kumar and Saraf [KS15b]. ### StackOverflow #### How do I get keepWhen behaviour in Elm 0.15? The keepWhen function from earlier versions of Elm was removed. I have ported an Elm application from 0.14, but I'm stuck at trying to get one part of it to work, where it's using keepWhen. So basically I'm looking for a function like keepWhen : Signal Bool -> a -> Signal a -> Signal a  I have found filter : (a -> Bool) -> a -> Signal a -> Signal a  but it's not quite the same thing, and I haven't figured out how to get it to work. ### QuantOverflow #### Confused on interpretation of betas/alphas in regression in finance I ran a regression on two stocks. I don't have the data in front of me, but it is a more conceptual question. Let's say SP500 returned a total 23% return over this time period and MSFT returned 9%. I ran the regression in R:  summary(lm(MSFT~SP500,data=mydata))  The coefficients show an intercept of around .003 and coefficient of around 1.5 for the SP500. The beta is statistically significant at around the 99.9% confidence and the intercept is NOT - only around 38% interval. Now my understanding of 'alpha' or the intercept - is that it is the 'excess return' that could be gained by investing in a strategy. I am confused how alpha could ever be positive if the Y value that you are comparing (MSFT) is less than the X (SP500). 
Here each 1% change in the SP500 returned 1.5% in Microsoft, but actually having a positive alpha - even a minuscule amount, and even though it's not statistically significant - is hurting my head. If anyone could just explain the relationship of the intercept in a basic regression like this, and the practical relationship of the betas, I would appreciate it. Would I ever get a positive alpha when MSFT's return is LESS than the SP500's and MSFT=Y, SP500=X?

### StackOverflow

#### Solve Coin Change Problems with both functional programming and tail recursion?

I know how to solve the Coin Change Problem with both imperative programming and DP. I know how to solve it with both FP and non-tail recursion, but that computes the same subproblem multiple times, which leads to inefficiency. I know how to compute Fibonacci numbers with both FP and DP/tail recursion; lots of articles use this example to explain that "FP can be combined with DP" and "recursion can be as efficient as a loop". But I don't know how to solve the Coin Change Problem with both FP and DP/tail recursion. I find it strange that articles on imperative programming always mention the inefficiency of computing the same subproblem multiple times in the Coin Change Problem, while those on FP just omit it. In a more general sense, I wonder whether FP is powerful enough to solve this kind of "two-dimensional" problem, while Fibonacci is a "one-dimensional" problem. Can anybody help me? Coin Change Problem: Given a value N, if we want to make change for N cents, and we have an infinite supply of each of S = {S1, S2, .., Sm} valued coins, how many ways can we make the change? The order of coins doesn't matter. For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}, so the output should be 4. For N = 10 and S = {2, 5, 3, 6}, there are five solutions: {2,2,2,2,2}, {2,2,3,3}, {2,2,6}, {2,3,5} and {5,5}, so the output should be 5.
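One editor's sketch of combining FP with bottom-up DP for this problem in Scala: thread the one-dimensional DP table through folds, processing one coin at a time so that coin order doesn't matter. foldLeft is iterative under the hood, so there is no stack growth, which is what tail recursion would buy you; this is an illustration, not code from the thread:

```scala
object CoinChange {
  // ways(amount) counts the ways to make `amount` from the coins seen so far.
  // Start from ways = [1, 0, 0, ..., 0] (one way to make 0), then for each
  // coin fold over the amounts coin..n, adding ways(amount - coin).
  def countWays(n: Int, coins: List[Int]): Long =
    coins.foldLeft(1L +: Vector.fill(n)(0L)) { (ways, coin) =>
      (coin to n).foldLeft(ways) { (w, amount) =>
        w.updated(amount, w(amount) + w(amount - coin))
      }
    }(n)
}
```

For the examples in the question, `CoinChange.countWays(4, List(1, 2, 3))` gives 4 and `CoinChange.countWays(10, List(2, 5, 3, 6))` gives 5. The inner fold could equally be written as an explicit tail-recursive helper with `@annotation.tailrec`; the fold is just the more idiomatic spelling of the same loop.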
I meet this problem while I'm learning Scala, so use Scala to demonstrate the code if it's possible. Lisp will also be good. But I know little about Haskell. #### Deserializing to case classes in Scala script using Jackson I have the following case classes in my Scala script. case class Story(kind:String, id:String, created_at:String, updated_at:String, accepted_at:String, story_type:String, name:String, description:String, current_state:String, requested_by_id:Long, estimate:Option[Int], project_id:Long, url:String, owner_ids:List[Long], labels:List[Label], owned_by_id:Long) case class Label(id:Long, project_id:Long, kind:String, name:String, created_at:String, updated_at:String)  This is the mapper configuration. val mapper = new ObjectMapper() with ScalaObjectMapper mapper.registerModule(DefaultScalaModule)  I am using Jackson Scala Module to deserialize response data from a REST api into these case classes. These work fine in a Scala file in a project. But when I try to use the same in a Scala script, they become anonymous inner classes as the whole script gets wrapped in an object. Since Jackson does not deserialize inner classes, it throws this exception. com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize Class Mainanon2JsonMappingStory (of type local/anonymous) as a Bean  Here the case classes are inner classes, if only I could make them static, they would work with Jackson. But it doesn't seem to be possible in Scala. Is there any workaround for this? #### For multiple generators to handle Seq I'm a new to scala, and I want to unique a Seq[(Int,Int)] by the first component, my code as follow: val seq = Seq((1,1), (0,1), (2,1), (0, 1), (3,1), (2,1)) val prev = -1 val uniqueSeq = for(tuple <- seq.sortBy(_._1) if !tuple._1.equals(prev); prev = tuple._1) yield tuple  but why the result is uniqueSeq: Seq[(Int, Int)] = List((0,1), (0,1), (1,1), (2,1), (2,1), (3,1))  #### Can I add class fields dynamically using macros? 
I'm new to Scala macros, so sorry if this is an obvious question. I wonder if the following is even possible before I dig in deeper. Let's say I have a class named DynamicProperties Is it possible to add members to the class based on something like this? val x: DynamicProperties = ... x.addProperty("foo", 1) x.addProperty("bar", true) x.addProperty("baz", "yep")  and have it be translated somehow to a class that looks like this more or less? class SomeName extends DynamicProperties { val foo: Int = 1 val bar: Boolean = true val baz: String: yep }  I guess this can be done via reflection, but I want people who use this, to have auto complete when they type x. that will fit what they did earlier using the addProperty method. Is this possible using Scala marcos? I want to try to implement it, but it will be good to know if this is going down a dead end path or not. ### /r/scala #### Interview Question: Reverse a linked-list (Scala solution) ### TheoryOverflow #### NP Complete equivalent to finding the minimum subset of "target" vertices of a bipartite graph, to cover the maximum number of "source" vertices I apologize for the wording of the question. I'm pretty new to theoretical CS; I've been a software engineer for most of my professional career, and I've just started a PhD program. Consider the following graph: In this graph, each vertex that of the form A_i represents an adversary's pure strategy. Every vertex that can be immediately reached from an A_i vertex represents the set of targets attacked by the adversary in this strategy. So, for example, if you look at A_1, the targets attacked by the adversary are \{o, b, a, h\}. If a defender places a resource on a target, then any pure strategy used by the adversary to attack that target, fails. So if the defender has k resources, he needs to figure out how he can place them on up to k targets so that he blocks a majority of the attacker's pure strategies. 
The attacker's mixed-strategy is known to the defender, and so he knows the probability with which the attacker will play any one of his pure strategies. Each pure-strategy also has a payoff associated with it. If the attacker is successful when playing his strategy, he will receive a payoff corresponding to that strategy. The defender will receive a negative payoff of the same amount. Hence, the effective negative payoff for the defender for any attacker strategy is the probability of the attacker playing that strategy, multiplied by the payoff for playing that strategy. The LP representing this has a negative constant in the objective function that represents the defender's payoff. This negative constant is equal to the maximum payoff the attacker can expect given his mixed strategy. Therefore, the best that the defender can do is 0, and so the LP tries to find the targets to block so that the payoff for the defender is as close to zero as possible. In the above picture, I was able to solve the linear program to find the optimal solution \{h, p, j\}. By placing resources on these targets, he is able to block the following adversary strategies: \{A_0, A_1, A_2, A_3, A_6, A_7\}. The actual problem is an optimization problem, so the defender has to find up to k resources that blocks all the attacker strategies to give him the maximum payoff. I know there are a few NP complete problems on bipartite graphs, but I am having a hard time figuring out which one I can apply here; I'm very new to this. ### Lobsters #### Debian 8 "Jessie" released ### StackOverflow #### how can access to the new javascript keyword from scala.js? I am porting a JavaScript library to Scalajs. The JS Objects are created with the new keyword in the JavaScript side so this is what I do in most case. trait Point extends js.Object { def normalize(length: Double): Point = js.native }  This seems to work well however, I don't get the same easiness for the constructor. 
@JSName("paper.Point") object PointNative extends js.Object { def apply(props: RectProps): Rectangle = js.native }  The above code does not work. I surely type checks and it compiles but at runtime it returns undefined. If I modified PointNative like this then all is good. import js.Dynamic.{ global => g, newInstance => jsnew } object PointNative { def apply(props: RectProps): Rectangle = jsnew(g.paper.Point)(props).asInstanceOf[Point] }  Is there a way to use @JSName and js.native with the new keyword? Thanks! ### CompsciOverflow #### Pinhole Camera Problem! [on hold] I was reading slides of a university course and in one of the slides it says 1 Eyes/cameras can’t have VERY small holes because that limits the amount of entering light 2 and diffracts/bends the light Bending of light(diffraction) only takes place when light goes near the edge of the surface of the object(e.g. water droplets in cloud) but in the case of the camera or eye light passes through the eye lens or camera lens(not around the surface of the lens), refraction takes place instead of diffraction, so my question is how they are saying that if eyes/cameras have very small holes they will diffracts/bends the light? ### QuantOverflow #### Binomial tree vs trinomial tree in pricing options Very new to pricing models. Is there a general guideline when to use binomial tree and when trinomial tree is preferred? As far as I know, unlike binomial tree, trinomial tree only gives a range instead of a unique value. Thank you. ### Lobsters #### fartscroll.js | Everyone farts. And now your web pages can too. ### StackOverflow #### How to create a typeclass instance for any subclass of Traversable in Scala I've created a toy example to illustrate a compiler error that I don't understand. Shouldn't the implicit conversion from C[_] <: Traversable[T] with Safe[T] to Safe[C[T]] apply? 
import scala.language.{implicitConversions, higherKinds} class ToyExample { implicit val stringsafe = new Safe[String] { override def stringify(value: String): String = value } def main(args: Array[String]): Unit = { val a: String = "c" val b: Seq[String] = Seq("1", "2", "3") safe(a) safe(b) // why is this a compiler error? } def safe[T](value: T)(implicit safe: Safe[T]): String = safe stringify value } object Safe { implicit def safeTraversable[C[_] <: Traversable[T], T](implicit safe: Safe[T]): Safe[C[T]] = new Safe[C[T]] { override def stringify(value: C[T]): String = value.map(safe.stringify).toString() } } trait Safe[T] { def stringify(value: T): String }  ### CompsciOverflow #### Linearizability and Serializability in context of Software Transactional Memory I've been trying to grasp serializability and linearizability in the context of software transactional memory. However, I think both notions can be applied to transactional memory in general. At this point, the following is my understanding of both subjects. # Serializability Serializability is a global property. It is a correctness property of transactions. Given k processes that each execute a transaction Tkconcurrently, serializability ensures that there is a sequence of transactions that can be executed in sequence (i.e., one after the other) such that the end result is the same as the transactions being executed concurrently. So there is a permutation of the list (T1, T2,..,Tk) that defines a list of transactions that can be executed sequentially. This property makes perfect sense to me and I think my definition is correct. I based this definition on the text in "The Art of Multiprocessor programming" by Herlihy and Shavit. # Linearizability Linearizability is a local property of concurrent objects (e.g., an instance of a class that is shared amongst threads). 
Linearizability ensures that when two processes each execute a series of method calls (e.g., enqueue or dequeue on a Queue instance) on that shared object, there is a sequential ordering of these method calls that does not necessarily preserve program order (the order in which the programmer wrote them down), but each method call does seem to happen instantly (i.e., invocation and response follow each other directly), whilst maintaining the result of each method call individually and consequently the object's state.

# Question

According to the paper "On the correctness of TM" by Guerraoui and Kapalka, this is the definition of linearizability in the context of TM: .. a safety property devised to describe shared objects, is sometimes used as a correctness criterion for TM. In the TM terminology linearizability means that, intuitively, every transaction should appear as if it took place at some single unique point in time during its lifespan. This definition just seems to resemble serializability to me. But the paper further on defines serializability as follows: .. is one of the most commonly required properties of a database transaction. Roughly speaking, a history H of transactions (i.e., the sequence of all operations performed by all transactions in a given execution) is serializable if all committed transactions in H issue the same operations and receive the same responses as in some sequential history S that consists of only the committed transactions in H. (A sequential history is one without concurrency between the transactions.) This definition, however, seems to imply that one can reorder statements from transactions in such a way that they are interleaved (i.e., reorder the statements such that not all the statements of a transaction T appear in sequence in H). I am under the assumption that my personal definitions above are correct. My actual question is how linearizability is defined in the context of transactional memory.
It makes no sense to reason about each method call (i.e., read/write operation) in a transaction individually, as this would break the semantics of transactional memory. Nor would it make sense to have to reason about the interleavings of two concurrent transactions, as this would obviously break serializability. Does linearizability mean that one can reorder individual operations inside a transaction? If linearizability is a stronger form of serializability, it should not matter in which order the operations are executed inside a single transaction. In short: first, is my understanding of serializability and linearizability correct? I am confused after reading a plethora of definitions in different works. And second, what does it mean for a set of transactions to be linearizable? I have also read the question that was linked inside the comments, but it did not answer my specific question.

# Sources

### StackOverflow

#### Send a WS request for each URL in a list and map the responses to a new list

I'm developing a REST server in Play with Scala that at some point needs to request data from one or more other web services. Based on the responses from these services, the server must compose a unified result to use later on. Example: Event C on www.someplace.com needs to be executed. In order to execute Event C, Event A on www.anotherplace.com and Event B on www.athirdplace.com must also be executed. Event C has a Seq(www.anotherplace.com, www.athirdplace.com) over which I would like to iterate, sending a WS request to each URL respectively to check whether A and B are executed. It is assumed that a GET to these URLs returns either true or false. How do I collect the responses from each request (preferably combined into a list) and assert that each response is equal to true? EDIT: An event may contain an arbitrary number of URLs, so I can't know beforehand how many WS requests I need to send.
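One common sketch for the fan-out question above (an editor's illustration; `executed` stands in for the real WS call and is stubbed so the snippet is self-contained): fire one Future per URL, flip the Seq[Future[Boolean]] into a Future[Seq[Boolean]] with Future.sequence, then reduce with forall. This handles an arbitrary number of URLs, since the sequence is built from whatever list the event carries:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object EventCheck {
  // Stand-in for the real WS call, e.g. something like
  // WS.url(url).get().map(_.body == "true") in Play; stubbed here.
  def executed(url: String): Future[Boolean] = Future(true)

  // One request per URL, collected into a single Future, reduced to
  // "were all prerequisite events executed?".
  def allExecuted(urls: Seq[String]): Future[Boolean] =
    Future.sequence(urls.map(executed)).map(_.forall(identity))
}
```

Inside a Play action you would map `allExecuted(event.urls)` into a Result rather than block; `Await.result` belongs only at the outermost edge, such as a test.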
### /r/compsci

#### What is a good alternative to Hadoop for smaller datasets?

I'm currently working on designing an analytics system for a startup. Hadoop seems like the obvious choice, but after researching it I'm not so sure. The files I need to manage will be CSV files of about 8MB each. Since HDFS is supposed to break files into 64MB blocks, it seems vastly overpowered for what I'll be working with. Is Hadoop still a good option, or is there another program/framework that would be more suitable for working with smaller files? submitted by ChinchillaSanchez

### StackOverflow

#### Suppressing sbt debug output

How do I suppress SBT's debug messages? They are logged to stdout, so running a project produces this: cat src/main/scala/Hello.scala object Hello { def main(args: Array[String]) { Console.println("Hello sbt") } } sbt run > out.txt cat out.txt [info] Set current project to hello (in build file:/home/synapse/projects/algs2/) [info] Running Hello Hello sbt [success] Total time: 1 s, completed May 14, 2013 11:39:23 PM

### Planet Emacsen

#### Emacs Life: Download / Install — eclim (eclipse + vim)

#### Emacs Life: auto-complete/auto-complete.el at master · auto-complete/auto-complete

### Lobsters

#### OpenBSD's new implementation of the file(1) utility

### Fefe

#### It does come across as a bit slimy right now ...

It does come across as a bit slimy right now, how all these pompous countries are trying to wring an admission out of Turkey that it bears the guilt for a genocide. I'm a bit torn on this myself. On the one hand, I find it a disgrace that Turkey won't admit it; it's a really obvious case. And if we had an authority here that was itself free of guilt, I would think it good for it to criticize Turkey. But who is criticizing the Turks here? Germany itself cannot admit that the Kosovo war was a war of aggression in violation of international law. The Bundeswehr describes the legal situation. Do you see anywhere in there a possible justification for foreign deployments of the Bundeswehr? Me neither. So Germany should stand way back in the corner and keep its mouth shut when it comes to moral questions and international law. On the other hand, Germany deals with its own past in a comparatively exemplary manner. I am still very dissatisfied, but only once you visit other countries do you realize how good we have it. Just look at how the British deal with their opium past and China, or the Americans with their atomic bombings of Japan. So I would almost say the other countries should also kindly keep their mouths shut when it comes to reproaching other countries over how they work through their history. My position, then, is something like: Turkey does deserve criticism here, but none of the usual suspects is in a moral position that would allow it to criticize other countries. Update: Oh, and while we're on the subject of acknowledging genocide ... how about Germany acknowledging the genocide of the Herero and Nama? "The genocide was recognized as genocide under the Convention on the Prevention and Punishment of the Crime of Genocide adopted by the UN General Assembly in 1948. The German federal government still takes no position on the classification of the event and rejects any responsibility for a genocide." But poking at the Turks, sure?

### /r/clojure

#### Does anyone have the video for the figwheel presentation at Clojure/West?

I was at Bruce Hauman's "Developing ClojureScript with Figwheel" talk and was hoping to review it and share it with some colleagues, but it doesn't seem to be listed on ClojureTV.
Does anyone know if it's available somewhere and if not, are there any plans to eventually publish it? submitted by Spiph [link] [2 comments] ### StackOverflow #### merge to set default values, but potentially expensive functions An idiomatic way to set default values in clojure is with merge: ;; merge can be used to support the setting of default values (merge {:foo "foo-default" :bar "bar-default"} {:foo "custom-value"}) ;;=> {:foo "custom-value" :bar "bar-default"}  In reality however, often the default values are not simple constants but function calls. Obviously, I'd like to avoid calling the function if it's not going to be used. So far I'm doing something like: (defn ensure-uuid [msg] (if (:uuid msg) msg (assoc msg :uuid (random-uuid))))  and apply my ensure-* functions like (-> msg ensure-uuid ensure-xyz). What would be a more idiomatic way to do this? I'm thinking something like: (merge-macro {:foo {:bar (expensive-func)} :xyz (other-fn)} my-map) (associf my-map [:foo :bar] (expensive-func) :xyz (other-fn))  #### Java io library: What is the difference between File.toString() and File.getPath() ... since it seems that both returns the same string - take a look at this Scala code: scala> val f = new File("log.txt") scala> f.getPath // res6: String = log scala> f.toString // res7: String = log  #### Play Framework how to sort collection in repeat() form helper? Based on the Play (java) documentation, let's say I have the following example: public class UserForm { public String name; public List<MyClass> itmes; }  and @helper.inputText(userForm("name")) @helper.repeat(userForm("items"), min = 1) { itemField => @helper.inputText(itemField) }  However, in MyClass I have an overridden implementation of compareTo(). I also have a getter getSortedItems() that will return the list in the proper sorted order. Currently, using the repeat() helper does not get my list of items in the ordering that I want. Is there a way to specify the ordering for the repeat() helper? 
Or can I give it a List as a parameter? It seems like this would be possible to do in Scala. Any help would be appreciated, thanks! ### /r/emacs #### How to make evil text objects with common text in delimiters? I'd like to make a LaTeX environment text object for evil. Based on some code I found on StackOverflow, I tried the following: (defmacro define-and-bind-text-object (key start-regex end-regex) (let ((inner-name (make-symbol "inner-name")) (outer-name (make-symbol "outer-name"))) (progn (evil-define-text-object ,inner-name (count &optional beg end type) (evil-select-paren ,start-regex ,end-regex beg end type count nil)) (evil-define-text-object ,outer-name (count &optional beg end type) (evil-select-paren ,start-regex ,end-regex beg end type count t)) (define-key evil-inner-text-objects-map ,key (quote ,inner-name)) (define-key evil-outer-text-objects-map ,key (quote ,outer-name))))) (define-and-bind-text-object "e" "\\\\begin{\\(\\w+\$$}" "\\\\end{\$$\\w+\$$}")  Unfortunately, this code cannot cope with nested environments, like the following: \begin{foo} \begin{bar} blah blah blah \end{bar} \end{foo}  What I need to do is make the text in the braces consistent between the beginning and ending delimiter. I tried using regex groups, but that didn't help. submitted by BLAND_AS_OVALTINE [link] [comment] ### CompsciOverflow #### An algorithm to compute a set of states that satisfy a specific CTL formula Working through a past exam question and I'm unsure where to start or what form they want the answer in: Define an algorithm that receives as input a finite transition system TS defined over the set of actions {a, b, c} and computes the set of states of TS that satisfy the CTL formula ∃a U b. Would anybody be able to give me a kick-start or a walk-through on how to answer this? Really appreciate any help. 
edit: My attempt based on Klaus' answer for all executions in TS for all states if b holds add current state to the stack, EXISTS = TRUE else if a holds if current state == next state in execution do nothing else if next state contains a or b add current state to the stack, EXISTS = TRUE else do nothing ### HN Daily #### Daily Hacker News for 2015-04-25 The 10 highest-rated articles on Hacker News on April 25, 2015 which have not appeared on any previous Hacker News Daily are: ## April 25, 2015 ### /r/emacs #### Whitespace mode I have a setup for whitespace mode that I find very useful. It highlights chars in lines over 80 characters. Here's the config to get that: (setq whitespace-line-column 80) (setq whitespace-style '(face lines-tail)) (add-hook 'prog-mode-hook 'whitespace-mode)  What I'd like is to have a function that makes space and tab characters show up, but I can't manage to get it working. As far as I can tell I need to something like (defvar whitespace-full-list '(face tabs spaces trailing lines space-before-tab newline indentation empty space-after-tab space-mark tab-mark newline-mark)) (defun better-whitespace () (interactive) (let ((whitespace-small-list '(face lines-tail)) (whitespace-big-list whitespace-full-list)) (if (eq whitespace-style whitespace-small-list) (setq whitespace-style whitespace-small-list) (setq whitespace-style whitespace-small-list)))) ;; just to test (define-key prog-mode-map (kbd "C-c w") 'better-whitespace)  Has anybody else dealt with something like this? EDIT: I have a working version. Here's the code it took, if anybody's interested in something similar. Thanks to /u/lawlist for helping! 
(defun better-whitespace () (interactive) (whitespace-mode -1) (let ((ws-small '(face lines-tail)) (ws-big '(face tabs spaces trailing lines-tail space-before-tab newline indentation empty space-after-tab space-mark tab-mark newline-mark))) (if (eq whitespace-style ws-small) (setq whitespace-style ws-big) (setq whitespace-style ws-small))) (whitespace-mode 1)) (define-key prog-mode-map (kbd "C-c w") 'better-whitespace)  submitted by nautola [link] [5 comments] ### TheoryOverflow #### Equivalent NP-Complete problem for this linear program I have a linear-program that solves for the defender's best response against a follower's mixed-strategy. In lpsolve, the problem is formulated as follows: max: 1.62 z_i0 + 0.85 z_i1 + 0.25 z_i2 + 0.28 z_i3 - 3; b + e + f + g + j + n + q + r + x <= 3; z_i0 - r - f - n - e - j <= 0; z_i1 - f - q - b <= 0; z_i2 - x <= 0; z_i3 - x - g <= 0; z_i0 >= 0; z_i0 <= 1; z_i1 >= 0; z_i1 <= 1; z_i2 >= 0; z_i2 <= 1; z_i3 >= 0; z_i3 <= 1; bin b; bin e; bin f; bin g; bin j; bin n; bin q; bin r; bin x; $z_{i0}$,$z_{i1}$,$z_{i2}$,$z_{i3}$are variables that can either be 0 or 1. If for the$j^{th}$follower strategy$z_{ij}$is 1, it means that the defender has at least one resource assigned to a target that is attacked by the attacker in his$j^{th}$strategy.$b$,$e$,$f$, ...,$x$are binary variables that simply say whether the defender has allocated a resource to this target. If the value is 1, it means there is a resource allocated to this target. The coefficients of each of the$z_{ij}$variables is just the product of the probability that the attacker plays the$j^{th}$strategy, and the reward the attacker gets for successfully playing that strategy. The defender gets a negative payoff if the attacker is successful, so the$3$at the end is just the maximum payoff that the attacker can get, which we subtract from the value of the objective function. 
The best the defender can do is get a payoff of 0 (we can see that if every $z_{ij}$ is 1, the defender gets the best payoff of 0). The idea is to figure out which targets the defender can optimally cover with up to $k$ resources. I'm trying to figure out which NP-complete problem I can reduce this to.

### QuantOverflow

#### Time-independent local volatility

Suppose somebody provides us with a surface of European call prices $C(\tau,K)$ where $\tau$ stands for time-to-maturity and $K$ for the strike. By Dupire's results, there is a unique local volatility function $\sigma(\tau,K)$ that generates these prices, and it can be expressed from them as $$\sigma^2(\tau,K) = \frac{2C_\tau}{K^2C_{KK}},$$ where for simplicity I am assuming that the interest rate is zero. Now, if we just have $C(T,K)$ for a single maturity $\tau = T$, is it true that there exists a unique time-independent local volatility $\sigma(K)$ that generates this price at that maturity? If it does, is there an analytic formula for that function?

### CompsciOverflow

#### $\mathsf{PP}$ compared to $\mathsf{\#P}$

Since we know that $\mathsf{TC^0\subsetneq PP}$, I wonder if we also know that $\mathsf{TC^0\subsetneq\#P}$? I understand that $\mathsf{\#P}$ is in the counting hierarchy.

### StackOverflow

#### "object index is not a member of package views.html" when opening a Scala Play project in Scala IDE

I've created a Play project with Play 2.3.7. In the project directory, I ran activator and ran the eclipse command to generate Eclipse project files. When I go to Eclipse (I'm using the Scala IDE from Typesafe, Build id: 4.0.0-vfinal-20150119-1023-Typesafe), there is an error in my Application.scala file:

object index is not a member of package views.html

Is there something amiss with my setup? The app runs fine when I execute run at the activator console prompt. Thanks!
EDIT: Added code

package controllers

import play.api._
import play.api.mvc._

object Application extends Controller {
  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }
}

The error is on the 'Ok..' line. There is a file in views called index.scala.html, and the app runs fine when I run it from activator.

#### Applicative Functor instance for functions with the same result type in Scala

I'm trying to implement the functor and applicative instances for functions with the same initial type in Scala. My current implementation of the two type classes is:

trait Functor[F[_]] {
  def fmap[A,B](f: A => B): F[A] => F[B]
}

trait Applicative[F[_]] extends Functor[F] {
  def pure[A](a: A): F[A]
  def apply[A,B](f: F[A => B]): F[A] => F[B]
}

The problem I have is that Function1 has two type parameters, and I think I cannot fix only one of them. Is it possible to do this in Scala?

#### How to set Linux environment variables with Ansible

Hi, I am trying to find out how to set an environment variable with Ansible; something that works like this simple shell command:

EXPORT LC_ALL=C

I tried it as a shell command and got an error; I tried using the environment module and nothing happened. What am I missing?

### QuantOverflow

#### A problem involving random walks from Shreve

Problem 5.4i in Shreve examines a symmetric random walk. Let $\tau_2$ be the first time that the random walk reaches 2. For $\alpha\in (0, 1)$, we are given that $$E[\alpha^{\tau_2}] =\sum_{k = 1}^\infty (\alpha/2)^{2k}\frac{(2k)!}{(k+1)!k!}$$ It's clear that $$E[\alpha^{\tau_2}] =\sum_{k = 1}^\infty \alpha^{2k} P(\tau_2 = 2k)$$ It's therefore tempting to conclude that $$P(\tau_2 = 2k)=\frac{(2k)!}{(k+1)!k!}2^{-2k}$$ And indeed that is the answer given. But in general $\sum_i f_i g_i =\sum_i f_i h_i$ does not imply that $g_i = h_i$, and so I'm not sure how we can reach this conclusion. (Asked about specific circumstances where this conclusion is true here.) What am I missing?
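On the coefficient question: two power series that agree for every $\alpha$ in an open interval must have identical coefficients, which is what justifies matching them term by term. As an independent sanity check of the resulting formula, here is a small Scala sketch (the object and helper names are made up for illustration) that counts the walks first reaching level 2 at step $2k$ and compares the count with $(2k)!/((k+1)!\,k!)$, i.e. the claimed probability times $2^{2k}$:

```scala
// Sanity check (a sketch): count symmetric +/-1 walks of length 2k that
// first reach level 2 at step 2k, and compare against (2k)!/((k+1)!k!).
object FirstPassage {
  // Counts per reachable position, for walks that stay below 2
  // until the final step and end at 2.
  def firstHitCount(k: Int): BigInt = {
    var counts = Map(0 -> BigInt(1))
    for (_ <- 1 until 2 * k) {
      counts = counts.toSeq
        .flatMap { case (p, c) => Seq((p + 1) -> c, (p - 1) -> c) }
        .filter { case (p, _) => p < 2 } // must not reach 2 early
        .groupBy(_._1)
        .map { case (p, cs) => p -> cs.map(_._2).sum }
    }
    counts.getOrElse(1, BigInt(0)) // final step goes from 1 up to 2
  }

  def factorial(n: Int): BigInt = (1 to n).map(BigInt(_)).product
  def catalan(k: Int): BigInt = factorial(2 * k) / (factorial(k + 1) * factorial(k))

  def main(args: Array[String]): Unit =
    for (k <- 1 to 8)
      println(s"k=$k walks=${firstHitCount(k)} expected=${catalan(k)}")
}
```

The two columns agree for small $k$, consistent with the answer given in the book.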
### /r/systems #### "Optimizing TLS for High–Bandwidth Applications in FreeBSD" [PDF, 2015] ### TheoryOverflow #### Test DataSet Generator for Monotone NAE 3SAT Algorithm Can someone point to a configurable (var/clause count etc.) difficult problem set generator for testing an 3SAT algorithm that are challenging for current algorithms.. specifically NAE 3SAT data set if possible.. ### StackOverflow #### Closing over java.util.concurrent.ConcurrentHashMap inside a Future of Actor's receive method? I've an actor where I want to store my mutable state inside a map. Clients can send Get(key:String) and Put(key:String,value:String) messages to this actor. I'm considering the following options. 1. Don't use futures inside the Actor's receive method. In this may have a negative impact on both latency as well as throughput in case I've a large number of gets/puts because all operations will be performed in order. 2. Use java.util.concurrent.ConcurrentHashMap and then invoke the gets and puts inside a Future. Given that java.util.concurrent.ConcurrentHashMap is thread-safe and providers finer level of granularity, I was wondering if it is still a problem to close over the concurrentHashMap inside a Future created for each put and get. I'm aware of the fact that it's a really bad idea to close over mutable state inside a Future inside an Actor but I'm still interested to know if in this particular case it is correct or not? #### How to eliminate vars in a Scala class? I have written the following Scala class based on a corresponding Java class: The result is not good. It still looks Java-like, is replete with vars, is very long, and is not idiomatic Scala in my opinion. I am looking to shrink this piece of code, eliminate the vars and the @BeanHeader stuff. 
Here is my code:  import scala.collection.immutable.Map class ReplyEmail { private val to: List[String] = List() private val toname: List[String] = List() private var cc: ArrayList[String] = new ArrayList[String]() @BeanProperty var from: String = _ private var fromname: String = _ private var replyto: String = _ @BeanProperty var subject: String = _ @BeanProperty var text: String = _ private var contents: Map[String, String] = new scala.collection.immutable.HashMap[String, String]() @BeanProperty var headers: Map[String, String] = new scala.collection.immutable.HashMap[String, String]() def addTo(to: String): ReplyEmail = { this.to.add(to) this } def addTo(tos: Array[String]): ReplyEmail = { this.to.addAll(Arrays.asList(tos:_*)) this } def addTo(to: String, name: String): ReplyEmail = { this.addTo(to) this.addToName(name) } def setTo(tos: Array[String]): ReplyEmail = { this.to = new ArrayList[String](Arrays.asList(tos:_*)) this } def getTos(): Array[String] = { this.to.toArray(Array.ofDim[String](this.to.size)) } def getContentIds(): Map[_,_] = this.contents def addHeader(key: String, val: String): ReplyEmail = { this.headers + (key -> val) this } def getSMTPAPI(): MyExperimentalApi = new MyExperimentalApi } }  =--------------------- I appreciate any help in accomplishing this goal. 
Updated Code I made some small changes to the code, like introducing an Option[String] instead of a String case class ReplyEmail( to: List[String] = Nil, toNames: List[String] = Nil, cc: List[String], from: String, fromName: String, replyTo: String, subject: String, text: String, contents: Map[String, String] = Map.empty, headers: Map[String, String] = Map.empty) { def withTo(to: String): ReplyEmail = copy(to = this.to :+ to) def withTo(tos: List[String]): ReplyEmail = copy(to = this.to ++ to) def withTo(to: Option[String], name: Option[String]) = copy(to = this.to :+ to, toNames = toNames :+ name) def setTo(tos: List[String]): ReplyEmail = copy() def withHeader(key: String, value: String) = copy(headers = headers + (key -> value)) def smtpAPI = new MyExperimentalApi  } Now, the only problem I face is in this line: The error is: type mismatch: found: List[java.io.Serializable] required: List[String] def withTo(to: Option[String], name: Option[String]) = copy(to = this.to :+ to, toNames = toNames :+ name)  #### How to define a type that corresponds to Map[String, Option[String]]? Given the following method that takes a Map[String, Option[String]] parameter: def myMethod(m: Map[String, Option[String]]) = { ... }  how to define a new type MyMap that implements Map[String, Option[String]] so that the method looks like this: def myMethod(m: MyMap) = { ... }  #### Why does immutable.ParMap.mapValues return ParMap instead of immutable.ParMap? In my Scala 2.11.6 application I have defined an immutable.ParMap like so:  object Foos { val foos: immutable.ParMap[String, Foo] = immutable.ParMap( ... ) }  Later on, I'd like to create a new immutable.ParMap using the same keys, so I use mapValues:  val fooServices: immutable.ParMap[String, FooService] = Exchanges.exchanges mapValues (_.fooService)  Scala complains Expression of type ParMap[String, FooService] does not conform to expected type ParMap[String, FooService]. 
As a workaround I can use a for comprehension, which the compiler permits:

val fooServices: immutable.ParMap[String, FooService] =
  for ((name, ex) <- Foos.foos) yield name -> ex.fooService

But it would be nice to be able to use the library functions that are designed for this exact task. Why can't mapValues infer the most specific type here?

#### How does the Cats library in Scala relate to scalaz?

How does the Cats library relate to scalaz? The Cats project mentions it is descended from scalaz.

### CompsciOverflow

#### Adleman's theorem to $\mathsf{P=BPP}$

Adleman's theorem gives $$\mathsf{BPP\subseteq P/Poly}.$$ Why is this theorem considered a progenitor of the derandomization conjecture that $\mathsf{P=BPP}$? Does it mean Adleman's result could be considered as evidence that $$\mathsf{BPP\subseteq P/Log}$$ is a realistic possibility? Analogously, does it mean $$\mathsf{NP\subseteq P/Poly}$$ could be considered as evidence that $$\mathsf{NP\subseteq P/Log}$$ is a realistic possibility? Is there a satisfactory answer without considering Impagliazzo-Wigderson's conditional $\mathsf{P=BPP}$ result?

### Planet Theory

#### EATCS honours three outstanding PhD theses with the first EATCS Distinguished Dissertation Awards

The EATCS is proud to announce that, after examining the nominations received from our research community, the EATCS Distinguished Dissertation Award Committee 2015, consisting of Javier Esparza, Fedor Fomin, Luke Ong and Giuseppe Persiano (chair), has selected the following three theses for the EATCS Distinguished Dissertation Award for 2015: Each of the awards carries a monetary prize of 1000 Euros, provided by the EATCS. Each of the award-receiving dissertations will be published online by the EATCS at http://www.eatcs.org/index.php/dissertation-award.
Karl Bringmann's thesis consists of two parts: one dealing with "Sampling from Discrete Distributions" and one dealing with "Computing Fréchet Distances". Sampling from a discrete probability distribution is a fundamental and classically studied problem. Bringmann's thesis contributes a deeper understanding of the amount of memory needed for sampling from a discrete probability distribution. The provided bound is tight for systematic data structures; for non-systematic data structures, the thesis shows that, quite surprisingly, with only one redundant bit it is possible to reply to queries in expected constant time. In the second part of the thesis, Bringmann relates the computational complexity of computing the Fréchet distance of two curves (a classical notion from Computational Geometry) to a variant, SETH', of the Strong Exponential Time Hypothesis. Specifically, if SETH' holds, then the Fréchet distance of two curves cannot be computed in strongly subquadratic time. Skrzypczak's thesis is about the use of descriptive set theory as a framework for investigating the omega-regular tree languages, or equivalently the languages defined by formulas of Monadic Second Order logic with several successors. The thesis makes progress on long-standing open problems in the theory of automata: the characterizations of regular languages of infinite trees that are definable in weak monadic second-order logic, and the Rabin-Mostowski index problem. For both problems, Skrzypczak's thesis provides solutions for notable special cases. Wootters' thesis approaches coding-theoretic problems from an analytic point of view rather than an algebraic point of view; it develops new tools for studying codes, makes several contributions, and settles a few important open problems. Specifically, Wootters' thesis advances the understanding of two important topics in Coding Theory: List Decoding and Local Decoding.
Regarding List Decoding, the thesis shows that random linear codes, over constant-sized alphabets, are optimally list-decodable (this answers a question asked by Elias over twenty years ago) and that there exist Reed-Solomon codes which are list-decodable beyond the Johnson bound (this answers a question asked by Guruswami and Sudan over 15 years ago). Regarding Local Decoding, the thesis gives a family of high-rate codes with local correctability that admits a sublinear-time decoding algorithm. ### StackOverflow #### Executing a function with a timeout What would be an idiomatic way of executing a function within a time limit? Something like, (with-timeout 5000 (do-somthing))  Unless do-something returns within 5000 throw an exception or return nil. EDIT: before someone points it out there is, clojure (with-timeout ... macro) but with that the future keeps executing that does not work in my case. #### Convert map of vectors to vectors of columns in Clojure Clojure newbie. Apologies if this has been answered, but searches have not turned up exactly what I need, and I am finding Clojure pretty overwhelming. :( I have a collection (or list or sequence or vector) of maps like so: { :name "Bob", :data [32 11 180] } { :name "Joe", :data [ 4 8 30] } { :name "Sue", :data [10 9 40] }  I want to create new vectors containing the data in the vector "columns" associated with keys that describe the data, like so: { :ages [32 4 10], :shoe-sizes [11 8 9], :weights [180 30 40] }  Actually, a simple list of vectors might be adequate, i.e.: [32 4 10] [11 8 9] [180 30 40]  If it's better/easier to make the original list into a vector, that's fine; whatever's simplest. ### CompsciOverflow #### The set of all vertices, such that each vertex in the set has a path to exactly$k$vertices I need to find algorithms for both undirected and directed graphs, with no assumption on them being connected. Also the algorithms must be$O(V+E)$, where the undirected one should not depend on$k$. 
I have ideas for both, i'm just not very sure about the directed version, because I'm not sure on how to approach the correctness proof. For the directed version, I thought about finding all the SCC's with two DFS runs, and then running a topological sort on the graph created by representatives from each SCC. Afterwards, i'll flip the edges and start a DFS from the end vertex. For every SCC node I visit i'll increment a counter by the number of nodes in the SCC I visited. If while vising a SCC node the counter shows$k+1nodes, all the nodes in the SCC will be added to the set. At this point the algorithm will go back as though we reached a leaf. For the undirected version, I can just do run a BFS/DFS for every disconnected component to find all the connected components. If a connected component has exactly k+1 nodes, then all the nodes in that component belong to the set. Any help regarding the directed version will be great, thanks! #### type theory notation troubles I'm working through "Types and Programming Languages" by Benjamin Pierce and I don't quite understand the notation. Particularly on Page 106, (chapter 9 Simply Typed Lambda-Calculus) there is a lemma (9.3.7): $$\text{If } \Gamma \vdash t:T \text{ and } x \notin dom(\Gamma) \text{, then } \Gamma, x:S \vdash t:T.$$ I understand the idea roughly but I can't quite read this statement out in english. How do I translate the turnstile symbol (\vdash in tex) in this context? In the book I've seen $$\vdash t:T$$ translated as t is a closed, well-typed term but I don't quite see how to apply that in this context. Looking around apparently the turnstile symbol is also used to mean provable. Gamma is the typing context and provides binding of variables and their types. t:T means the term t is has the type T. The function dom(gamma) is the set of variables bound by Gamma. 
So my best guess for translating this is as follows: If the term t having type T is a closed, well-typed term under the typing context gamma and x is not in the set of bound variables of gamma, then the term t having type T is still a closed, well-typed term in the typing context gamma after binding the term x having type S. Is this roughly right? It makes sense but I still feel a bit shaky on the translation. Anyone have any other suggestions, corrections, or comments? Thanks #### Are LALR tables equal to SLR tables if the grammar is SLR modulo precedence/associativity of operators? Consider a grammar G which is SLR(1) except for some shift/reduce conflicts which can be resolved by imposing some precedence or associativity of operators. Is it the case that the construction of the SLR(1) parser and the LALR(1) always yield the same table? If not, what happens if the grammar G is SLR(1)? I'm particularly interested in this question about the following grammar: \begin{align*} C &\to E \, \text{rel} \, E \mid C \& C \mid ! \, C \mid \text{pred}\, ( \,Es\, ) \\ E &\to \text{num} \mid \text{id} \mid E\, +\, E \mid E\, *\, E \mid - \,E \mid ( \,E \,) \\ Es &\to E \mid E\, ,\, Es \end{align*} Using a custom SLR and LALR table generator I obtain exactly the same table for the above grammar, with some shift/reduce conflicts that can be resolved by giving precedences in the order &, !, rel, +, *, - and left associativity to &, + and *. I'm in doubt whether my implementation is incorrect or the tables are really the same, and if this is the case whether there is some general rule that applies in certain circumstances. ### Planet Clojure #### Pantomime 2.6.0 is Released ## TL;DR Pantomime is a Clojure interface to Apache Tika. 2.6.0 is a minor release that upgrades Tika. ### Apache Tika 1.8 Apache Tika dependency has been upgraded to version 1.8. ## Change Log Pantomime change log is available on GitHub. 
## Pantomime is a ClojureWerkz Project Pantomime is part of the group of libraries known as ClojureWerkz, together with • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model • Elastisch, a minimalistic Clojure client for ElasticSearch • Cassaforte, a Clojure Cassandra client built around CQL • Monger, a Clojure MongoDB client for a more civilized age • Welle, a Riak client with batteries included • Neocons, a client for the Neo4J REST API • Quartzite, a powerful scheduling library and several others. If you like Pantomime, you may also like our other projects. Let us know what you think on Twitter or on the Clojure mailing list. Michael on behalf of the ClojureWerkz Team ### /r/netsec #### Broken, Abandoned, and Forgotten Code, Part 1 ### Planet Clojure #### Quieter clojure.test Output If you use clojure.test then there is a good chance you’ve been annoyed by all the output when you run your tests in the terminal. When there is a test failure you have to scroll through pages of output to find the error. With release 0.9.0 of lein-test-refresh you can minimize the output of clojure.test and only see failure and summary messages. To enable this feature add :quiet true to the :test-refresh configuration map in either your project.clj or profiles.clj file. If you configure lein-test-refresh in ~/.lein/profiles.clj then turning on this feature looks like the following. 1  1 2  {:user {:plugins [[com.jakemccrary/lein-test-refresh "0.9.0"]] :test-refresh {:quiet true}}}  Setting up your profiles.clj like above allows you to move to Clojure project in your terminal, run lein test-refresh, and have your clojure.tests run whenever a file changes. In addition, your terminal won’t show the usual Testing a.namespace output. Below is what you typically see when running clojure.test tests in a terminal. I had to cut most of the Testing a.namespace messages from the picture. The following picture is with quiet mode turned on in lein-test-refresh. 
No more Testing a.namespace messages! No more scrolling through all your namespaces to find the failure! I just released this feature, so I haven't had a chance to use it much yet. I imagine it may evolve to change the output more. 1. More configuration options can be found here

### QuantOverflow

#### ex ante tracking error correlation between funds

I have two portfolios called Comb & Global. They both have the same investable universe, let's say 3000 stocks, and are measured against the same benchmark. So it is possible that both funds hold the same stocks. I would like to examine the correlation of the ex-ante tracking errors between the two funds. I know I can calculate the ex-ante tracking error as below,

te = sqrt((port_wgt - bm_wgt)' * cov_matrix * (port_wgt - bm_wgt))

I also know the correlation is calculated by

p = cov(x,y) / (stdev(x) * stdev(y))

I was wondering about the best way to calculate the ex-ante correlation between the two funds. Is there a relationship between the two funds' weights that I can make use of?

Update

I should have mentioned that the two portfolios are sub-portfolios and are combined into one portfolio. So I wanted to see the correlation of the ex-ante tracking error between the two sub-portfolios. I realised I can do the following,

port_wgts - number_of_companies x 2 matrix
cov_matrix - number_of_companies x number_of_companies matrix

so the line below will return a 2x2 covariance matrix.

port_wgts' * cov_matrix * port_wgts

So I have the variances of both sub-portfolios - taking the square root of this gives me the tracking error for both. Convert the 2x2 covariance matrix to a correlation matrix by the following

D = Diag(cov_matrix)^(1/2)
corr_matrix = D^-1 * cov_matrix * D^-1

So I now have the correlation between the two sub-portfolios just using the weights.

### /r/netsec

#### Unpacking CCTV Firmware

### DataTau

#### Who are the Uninsured Americans?
(interactive visualization) ### Lobsters #### Heuristics in Game Programming ### Overcoming Bias #### Financial Status At a finance conference last year, I learned this: Instead of saving money directly for their own retirement, many workers have their employers save for them. Those employers hire in-house specialists to pick which specialty consulting firms to hire. These consulting firms advise employers on which investment firms to use. And those investment firms pick actual productive enterprises in which to invest. All three of these intermediaries, i.e., employer, consultant, and investor, take a cut for their active management. Even employees who invest for themselves tend to pick at least one high fee intermediary: an active-management investment firm. Few take the low cost option of just directly investing in a low-overhead index fund, as recommended by academics for a half-century. I’ve given talks at many active-management investment firms over the years. They pay speakers very well. I’ve noticed that (like management consults) they tend to hire very visibly impressive people. They also give big investors a lot of personal quality time, to create personal relationships. Their top people seem better at making investors like them than at picking investments. One math-focused firm said it didn’t want more investors because investors all demand more face time and influence over investment choices. Since 1880 the fraction of US GDP paid for financial intermediation has gone from 2% to 8%. And: The unit cost [relative to asset income] of financial intermediation appears to be as high today as it was around 1900. This is puzzling. Advances in information technology (IT) should lower the physical transaction costs of buying, pooling and holding financial assets. Trading costs have indeed decreased, but trading volumes have increased even more, and active fund management is expensive. 
… Investors spend 0.67% of asset value trying (in vain on average, by definition) to beat the market. … While mutual funds fees have dropped, high fee alternative asset managers have gained market share. The end result is that asset management unit costs have remained roughly constant. The comparison with retail and wholesale trade is instructive. In these sectors … larger IT investment coincides with lower prices and lower (nominal) GDP shares. In finance, however, exactly the opposite happens. … A potential explanation is oligopolistic competition but … the historical evidence does not seem to support the naive market power explanation, however. (more) Our standard academic story on finance is that it buys risk-reduction, and perhaps also that we are overconfident in finance judgements. But it isn’t clear we’ve had much net risk reduction, especially to explain a four times spending increase. (In fact, some argue plausibly that those who take more risk don’t actually get higher returns.) On overconfidence, why would it induce such indirection, and why would its effects increase by such a huge factor over time? Finance seems to me to be another area, like medicine, schools, and many others, where our usual standard stories just don’t work very well at explaining the details. In such cases most economists just gullibly plow ahead trying to force-fit the standard story onto available data, instead of considering substantially different hypotheses. Me, I try to collect as many pieces of related puzzling data as I can, and then ask what simple but different stories might account at once for many of those puzzles. To me an obvious explanation to consider here is that we like to buy special connections to prestigious advisors. We look good when bonded to others who look good, and we treat investor relations as especially important bonds. We seem to get blamed less for failures via prestigious associates, and yet are credited for most of our success via them. 
Finally, we just seem to directly like prestigious associations, even when others don't know of them. And we may also gain from associating with others who share our advisors. To explain the change in finance over time, I'll try my usual go-to explanation for long-term changes in the last few centuries: increasing wealth. In particular, social bonds are a luxury that we buy more of when richer. This can explain the big increases we've seen in leisure, product variety, medicine, and schooling. So as we get rich, we spend larger fractions of our time socializing, we pay more for products with identities that can tie us to particular others, we spend more to assure associates that we care about their health, and we spend more to visibly connect with prestigious associates. Some of those prestigious associates are at the schools we attend, the places we live, and via the products we buy. Others come via our financial intermediaries. This hypothesis suggests an ironic reversal: while we usually play up how much we care about associates, and play down our monetary motives, in finance we pretend to make finance choices purely to get money, while in fact we lose money to gain prestigious associates.

### StackOverflow

#### Is it possible to showcase the different strategies of evaluation by modifying this simple reducer?

I am the kind that prefers learning by looking at code instead of reading long explanations. This might be one of the reasons I dislike long academic papers. Code is unambiguous, compact, noise-free, and if you don't get something you can just play with it - no need to ask the author. This is a complete definition of the Lambda Calculus:

-- A Lambda Calculus term is a function, an application or a variable.
data Term = Lam Term | App Term Term | Var Int deriving (Show,Eq,Ord)

-- Reduces a lambda term to its normal form.
reduce :: Term -> Term
reduce (Var index) = Var index
reduce (Lam body) = Lam (reduce body)
reduce (App left right) = case reduce left of
  Lam body -> reduce (substitute (reduce right) body)
  _        -> App (reduce left) (reduce right)

-- Replaces bound variables of target by term and adjusts De Bruijn indices.
-- Don't mind those variables, they just keep track of the De Bruijn indices.
substitute :: Term -> Term -> Term
substitute term target = go term True 0 (-1) target where
  go t s d w (App a b) = App (go t s d w a) (go t s d w b)
  go t s d w (Lam a) = Lam (go t s (d+1) w a)
  go t s d w (Var a) | s && a == d = go (Var 0) False (-1) d t
  go t s d w (Var a) | otherwise = Var (a + (if a > d then w else 0))

-- If the evaluator is correct, this test should print the Church numeral #4.
main = do
  let two = Lam (Lam (App (Var 1) (App (Var 1) (Var 0))))
  print $ reduce (App two two)


In my opinion, the "reduce" function above says much more about the Lambda Calculus than pages of explanations, and I wish I could just have looked at it when I started learning. You can also see it implements a very strict evaluation strategy that goes even inside abstractions. In that spirit, how could that code be modified in order to illustrate the many different evaluation strategies that the LC can have (call-by-name, lazy evaluation, call-by-value, call-by-sharing, partial evaluation and so on)?
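Not a modification of the Haskell reducer itself, but a minimal sketch of the underlying distinction in Python (chosen purely for illustration; all names below are mine): call-by-value evaluates an argument before the call, call-by-name passes it as an unevaluated thunk, and lazy evaluation (call-by-need) is a thunk that memoizes its first result.

```python
# Sketch of evaluation strategies via explicit thunks (zero-argument functions).
calls = [0]

def expensive():
    calls[0] += 1
    return 42

def const_cbv(x):
    # Call-by-value: the argument was already evaluated before we got here,
    # even though the body ignores it.
    return 1

def const_cbn(thunk):
    # Call-by-name: the argument arrives frozen; since the body never
    # forces thunk(), it is never evaluated at all.
    return 1

def memoize(thunk):
    # Call-by-need (lazy evaluation): force at most once, then cache.
    cache = []
    def forced():
        if not cache:
            cache.append(thunk())
        return cache[0]
    return forced

const_cbv(expensive())           # argument evaluated: calls[0] becomes 1
const_cbn(lambda: expensive())   # argument never evaluated: still 1
lazy = memoize(lambda: expensive())
lazy(); lazy()                   # evaluated once despite two forcings
print(calls[0])                  # 2
```

In the reducer above, the same distinction shows up in the `App` case: call-by-value reduces `right` before substituting (as the posted code does), while call-by-name would substitute `right` unreduced and let the body decide whether to force it.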

### TheoryOverflow

#### Adleman's theorem to $\mathsf{P=BPP}$

Adleman's theorem gives $$\mathsf{BPP\subseteq P/Poly}.$$

Why is this theorem considered a progenitor of the derandomization conjecture that $\mathsf{P=BPP}$?

Why does Adleman's result suggest that $$\mathsf{BPP\subseteq P/Log}$$ is a realistic possibility?

Does it mean $$\mathsf{NP\subseteq P/Poly\implies NP\subseteq P/Log}$$ is a realistic possibility?

#### What would signify hierarchy collapse to first level?

We know that $$\mathsf{NP\subseteq P/Poly \iff coNP\subseteq P/Poly\implies PH=\Sigma_2^P=\Pi_2^P}$$ $$\mathsf{NP\subseteq P/Log\iff coNP\subseteq P/Log\implies PH=\Sigma_0^P=\Pi_0^P}$$

Is there a circuit result that would imply $$\mathsf{PH=\Sigma_1^P=\Pi_1^P}$$ but still $$\mathsf{PH\neq\Sigma_0^P=\Pi_0^P}$$ is a possibility?

Is there a possible similar collapse result from non-circuit containment?

### CompsciOverflow

#### How can a compiler do without constructing an explicit parse tree? [on hold]

What are the benefits and drawbacks of explicit parse tree construction?

### /r/emacs

#### (delete-selection-mode) doesn't work...

Hi!

I just installed Emacs for Mac OS, and installed the configuration files as per this tutorial: http://www.braveclojure.com/basic-emacs/#1__Installation . But for some reason I can't delete regions, even though the tutorial says it should work. I'm very new to emacs so nothing makes much sense: even when I manually activate (delete-selection-mode) I can't delete regions (ctrl-space, select region, backspace). Am I doing something wrong?

submitted by JacquesDegree

### TheoryOverflow

#### How can a top-down parser detect ungrammaticality of an input string [on hold]

What would be a sketch of recovering from parsing errors?

Will backtracking be used for this?

Thanks

#### Why do top-down parsers have difficulty with ambiguous grammars [on hold]

I know there is a recent enough method that makes it possible for top-down parsers to parse an ambiguous grammar,

But why is it so difficult for a top-down parser to parse an ambiguous grammar?

I'm very new to the field of compilers, so a simple and concise explanation would work well for me.

Thanks

### StackOverflow

#### Ocaml List: Implement append and map functions

I'm currently trying to extend a friend's OCaml program. It's a huge collection of functions needed for some data analysis. Since I'm not really an OCaml expert, I'm currently stuck on a (for me) strange list implementation:

type 'a cell = Nil
| Cons of ('a * 'a llist)
and 'a llist = (unit -> 'a cell);;


I've figured out that this implements some sort of "lazy" list, but I have absolutely no idea how it really works. I need to implement an append and a map function based on the above type. Has anybody got an idea how to do that?
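The `unit -> 'a cell` type means a list is a function you call to get the next cell, which is what makes it lazy. Here is the same idea sketched in Python (the names are mine, purely for illustration), with append and map written the way they would look over that type:

```python
# A lazy list: an 'llist' is a zero-argument function (a thunk) that,
# when called, yields either None (Nil) or a pair (head, tail) (Cons).
def nil():
    return None                      # the empty list's only cell

def cons(head, tail):
    return lambda: (head, tail)      # delay the cell behind a thunk

def append(xs, ys):
    def cell():
        c = xs()                     # force xs only when we are forced
        if c is None:
            return ys()              # xs exhausted: continue with ys
        head, tail = c
        return (head, append(tail, ys))
    return cell

def lmap(f, xs):
    def cell():
        c = xs()
        if c is None:
            return None
        head, tail = c
        return (f(head), lmap(f, tail))
    return cell

def to_list(xs):                     # force the whole list, for testing
    out, c = [], xs()
    while c is not None:
        head, xs = c
        out.append(head)
        c = xs()
    return out

print(to_list(lmap(lambda n: n * 10,
                   append(cons(1, cons(2, nil)), cons(3, nil)))))  # [10, 20, 30]
```

The OCaml versions have the same shape: wrap each body in `fun () -> ...`, match on `xs ()`, and rebuild a `Cons` whose tail is the recursive call, so nothing past the current cell is computed until it is demanded.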

Any help would really be appreciated!

### Planet Theory

#### Girls Who Code (Newton) Visit to Harvard

My friend David Miller is looking for instructors to help out with the Newton Girls who Code club.  Here's an announcement, please connect with him if you're interested.

They visited Harvard last week -- David gave me permission to post his description of the visit.  It seemed like a well-organized event -- thanks to the Harvard Women in Computer Science group for putting it together.

----

Last Friday, the Newton Girls Who Code club was welcomed by the Harvard Women in Computer Science at Harvard's Maxwell-Dworkin Laboratory. The students learned about the joint Harvard-MIT Robot Soccer team from the mechanics tech lead, Harvard junior Kate Donahue. She showed us last year's fleet of robots (named after Greek and Roman gods), and described their work on preparing this year's fleet (to be named after Harry Potter characters). Kate emphasized the interplay between computer vision, artificial intelligence, mechanical engineering, and distributed systems. Many of the robot parts are 3D printed -- a technology that the Newton GWC students will become more familiar with this fall as we integrate the Newton Library's 3D printer into the GWC activities.

After the robots demonstration, the students took part in a Q+A discussion with Harvard WiCS undergrads Hana Kim, Ann Hwang, and Adesola Sanusi. Our students asked great questions about our hosts' motivation and history with coding, the mechanics of being a college CS major, the role of gender in the CS community, the connections between computer science and other fields, and our hosts' vision for the future of computing. The WiCS undergrads are excellent role models and were enormously encouraging. They pronounced our students more than ready to take Harvard's most popular course, Introduction to Computer Science, and recommended they try out the free, online, EdX version today. It was an exhilarating afternoon!

### /r/compsci

#### Having difficulties with some computer architecture questions.

I have this problem:

A benchmark program makes 12000 memory references. The program accesses cache and main memory. The average memory access time is 28 ns, while the cache and main memory access times are 12 ns and 48 ns respectively. How many of the benchmark references are in main memory?

It seems like simultaneous equations could be used, i.e. x+y=12000 and (12x+48y)/12000=28, where x is the number of times cache is accessed and y is the number of times main memory is accessed. But this method doesn't give whole numbers for x and y. Can anybody help me with this? :)
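The simultaneous-equations model is one of two textbook definitions of average memory access time; which one the course intends is an assumption here, but only the hierarchical model (every access pays the cache time, and misses additionally pay the main memory time) yields whole numbers:

```python
# Two common models of average memory access time (AMAT); which one the
# question intends is an assumption, but only the second gives whole numbers.
refs, t_cache, t_main, amat = 12000, 12, 48, 28

# Model 1 (exclusive): each reference costs either t_cache or t_main.
#   x + y = refs  and  (t_cache*x + t_main*y) / refs = amat
y_flat = refs * (amat - t_cache) / (t_main - t_cache)
print(y_flat)                  # 5333.33..., not a whole number: dead end

# Model 2 (hierarchical): every reference pays t_cache; on a miss the
# access additionally pays t_main:  amat = t_cache + miss_rate * t_main
miss_rate = (amat - t_cache) / t_main          # (28 - 12) / 48 = 1/3
y_hier = refs * miss_rate
print(y_hier)                  # 4000.0 references go to main memory
```

Under the hierarchical reading, a third of the 12000 references miss the cache, so 4000 go to main memory.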

Furthermore, what is a typical size for a cache block today? I've found conflicting answers online.

submitted by sippd

### Lobsters

#### What Comes After MVC

Hey folks, here’s the talk I gave two days ago on improving OO design in Rails apps, mostly applying ideas taken from functional programming in a way Ruby/Rails devs don’t see as weird and unidiomatic. Rolling this around in my head is why I keep posting so many stories about FP, dynamic languages, OO design, and the intersection of same.

A silly trailer for the talk and signup for ongoing info + slides with speaker notes is here: https://push.cx/2015/railsconf

### CompsciOverflow

#### Finding a loop invariant?

What are some efficient ways to find a loop invariant for a given piece of code or a Hoare triple? For example, I managed to find some invariants for simpler examples, but I'm struggling with one that I found. It's a Hoare triple and goes like:

{ 0<N ∧ (∃k : 0≤k<N : a[k]=0) }

i := 0 ; x := false ;
while i < N do {x := x ∨ (a[i]=0) ; i++}

{ x }


My guess is that the invariant is: i < N, but I feel that I'm missing something. Maybe I could express this as i ≥ 0, because i starts at 0 and only increases afterwards.
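Note that i < N alone cannot be an invariant (it fails after the final iteration), and it says nothing about x. A candidate that does work is 0 ≤ i ≤ N together with x ⇔ (∃k : 0 ≤ k < i : a[k] = 0). A small Python sketch (names are mine, for illustration) that checks this candidate on entry and after every iteration:

```python
# Candidate invariant for:  i := 0; x := false;
#   while i < N do { x := x or (a[i] = 0); i := i + 1 }
# Inv:  0 <= i <= N  and  x == (exists k in [0, i) with a[k] == 0)
def check_invariant(a):
    N = len(a)
    i, x = 0, False
    inv = lambda: 0 <= i <= N and x == any(a[k] == 0 for k in range(i))
    assert inv()          # invariant holds on entry (i = 0, x = False)
    while i < N:
        x = x or (a[i] == 0)
        i += 1
        assert inv()      # and it is preserved by every iteration
    # On exit i == N, so Inv gives x == (exists k < N with a[k] == 0).
    return x

print(check_invariant([3, 0, 5]))
```

On exit the invariant plus the negated guard gives x ⇔ ∃k < N : a[k] = 0, which together with the precondition ∃k : 0 ≤ k < N : a[k] = 0 yields the postcondition { x }.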

### TheoryOverflow

#### What is the relationship between $\mathsf{APX}$ and $\mathsf{MaxSNP}$ classes?

My understanding of these classes is really fuzzy. The more I try to read, the more confused I get. Can anyone help me understand the relationship between these classes? More precisely, is $\mathsf{MaxSNP} = \mathsf{APX}$ or $\mathsf{MaxSNP} \subset\mathsf{APX}$?

Thanks a lot in advance. :)

### /r/emacs

#### "msw-theme" - what do you do for writing articles?

Hi all,

It's really quite silly, but despite the infinite power of org-mode, I keep going back to Microsoft Word to write documents. It just somehow looks infinitely better than using a fixed-width programming font to write articles.

I've been using the following code snippet to get a more Word-like org experience. It's been working well, although it's pretty primitive.

;; Toggle Microsoft Word-like writing settings
(defun toggle-msw-theme ()
  (interactive)
  (if (equal custom-enabled-themes '(tango-dark))
      (progn
        (load-theme 'leuven)
        (set-face-attribute 'default nil :font "Calibri" :height 120 :foreground "#FFFFFF"))
    (progn
      (load-theme 'tango-dark)
      (set-face-attribute 'default nil :font "DejaVu Sans Mono" :height 120))))

Would anyone like to share what they do in lieu of Word? Thanks for your thoughts!

submitted by defenestre

### StackOverflow

#### Shapeless - turn a case class into another with fields in different order

I'm thinking of doing something similar to Safely copying fields between case classes of different types but with reordered fields, i.e.

case class A(foo: Int, bar: Int)
case class B(bar: Int, foo: Int)


And I'd like to have something to turn an A(3, 4) into a B(4, 3) - shapeless' LabelledGeneric comes to mind, however

LabelledGeneric[B].from(LabelledGeneric[A].to(A(12, 13)))


results in

<console>:15: error: type mismatch;
found   : shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("foo")],Int],shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("bar")],Int],shapeless.HNil]]
(which expands to)  shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("foo")],Int],shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("bar")],Int],shapeless.HNil]]
required: shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("bar")],Int],shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("foo")],Int],shapeless.HNil]]
(which expands to)  shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("bar")],Int],shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("foo")],Int],shapeless.HNil]]
LabelledGeneric[B].from(LabelledGeneric[A].to(A(12, 13)))
^


How do I reorder the fields in the record (?) so this can work with a minimum of boilerplate?

### CompsciOverflow

#### Is there a complexity viewpoint of Galois' theorem?

• Galois's theorem effectively says that one cannot express the roots of a polynomial of degree >= 5 using rational functions of the coefficients and radicals - can't this be read as saying that, given a polynomial, there is no deterministic algorithm to find its roots?

• Now consider a decision question of the form: "Given a real-rooted polynomial $p$ and a number $k$, are the third and fourth highest roots of $p$ at least $k$ apart?"

A proof certificate for this decision question would just be the set of roots of this polynomial, and that is a short certificate, so it looks like $NP$. BUT isn't Galois's theorem saying that there does not exist any deterministic algorithm to find a certificate for this decision question? (And this property, if true, rules out any algorithm to decide the answer to this question.)

So in what complexity class does this decision question lie in?

All NP-complete questions I have seen have a trivial exponential-time algorithm available to solve them. I don't know if this is a property that should always hold for all NP-complete questions. For this decision question it doesn't seem to be true.

### TheoryOverflow

#### Complexity analysis on a parameterized recurrence relation

In order to analyse the complexity of our algorithm, we try to solve this recurrence:

$T(n)=3T(n-1)-T(n-2)+T(n-k)+3^k$ ; in which $k$ is a parameter to be fixed.

We know that this kind of recurrence means $T(n)=O(\alpha^n)$, where $\alpha$ is the largest real zero of $f(x)=x^n-3x^{n-1}+x^{n-2}-x^{n-k}-3^k$.

The term $3^k$ favors a small value of $k$, while the remaining terms favor a large value of $k$; therefore we believe there should be a best choice of $k$.

So the question is how to choose $k$, $(k=g(n))$ in order to minimize $\alpha$?

Thank you in advance for any idea.

UPDATE 1:

1. The recurrence has been corrected, and yes, $\alpha$ should be between 2 and 3.

2. Please note that $k$ is a dynamic value, which changes during the recursion. So it's better to consider the recurrence as $T(n)=3T(n-1)-T(n-2)+T(n-g(n))+3^{g(n)}$

UPDATE 2:

To simplify the problem somewhat, we may consider $g(n)=\beta n$ and try to find the best $\beta$?
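As a rough numerical sketch (under the assumption, which may not hold for this inhomogeneous recurrence, that the growth rate is governed by the homogeneous part, so the $3^k$ term is ignored): dividing $x^n-3x^{n-1}+x^{n-2}-x^{n-k}=0$ by $x^{n-k}$ gives $x^k-3x^{k-1}+x^{k-2}-1=0$, whose dominant root can be found by bisection for each fixed $k$:

```python
# Bisection for the dominant root of x^k - 3x^(k-1) + x^(k-2) - 1 = 0,
# i.e. the homogeneous part of the recurrence (the 3^k term is ignored
# here, which is an assumption).
def alpha(k, lo=2.0, hi=3.5, iters=80):
    f = lambda x: x**k - 3 * x**(k - 1) + x**(k - 2) - 1
    # On this bracket, f(lo) < 0 < f(hi) for every k >= 2.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

for k in (2, 3, 5, 10, 20):
    print(k, round(alpha(k), 4))
```

Numerically, alpha(2) = 3 and alpha decreases toward the larger root of $x^2-3x+1$, about 2.618, as $k$ grows, while the $3^k$ cost grows; that shrinking-versus-growing trade-off is exactly where a best $k$ (or $\beta$) would come from.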

# twenty-years-after

## 1 Emacspeak Was Released Twenty Years Ago Today

The more things change, the more they remain the same.

Emacspeak was released 20 years ago on April 25, 1995 with this announcement. The Emacspeak mailing list itself did not exist in its present form — note that the original announcement talks about a mailing list at DEC CRL. When Greg started the mailing list at Vassar, we seeded the list from some/all of the messages from the archive for the mailing list at DEC.

### /r/clojure

#### Senior Clojurists: Tell Anything! (Aka: What do you wish you had known earlier)

Seasoned Clojurists: Tell us what you wish somebody had told you earlier. Tell us tips, tricks or things people should focus on.

submitted by MyNameIsFuchs

### StackOverflow

#### Changes to existing routes and views in scala play app not taking effect

If I try to make any changes to existing views or routes in my Play app, I'm not able to see those changes; but if I copy my changed view file to a new file and use that in my controller, I am able to see them. I've tried compiling and running the app through the command line with activator and also through the IDEA IDE. I've tried invalidating my IDE cache and rerunning the app, with no luck.

To make it clearer, I created a new file root.scala.html with the contents of modified index.scala.html view and modified my index controller action to

def index = Action {
  Ok(views.html.root.render())
}


This works (changes made to index.scala.html get reflected).

#### Why does sbt give "can't expand macros compiled by previous versions of Scala" for Def.inputTask macro in Scala 2.11.1?

I'm using Scala 2.11.1 and sbt 0.13.5.

I have an sbt plugin that contains a helper function to create input tasks as follows (the implementation is stripped away as it's irrelevant to the problem):

def register(name: String, description: String): Def.Setting[InputTask[Unit]] = {
  println("test")
}


This function compiles and works just fine in Scala 2.10.4, however once I switch to 2.11.1 it fails with the following error:

can't expand macros compiled by previous versions of Scala

Is the Def.inputTask macro simply broken in Scala 2.11.1, or am I missing some glaring detail?

Right now the above function is residing in the simplest sbt plugin imaginable. There are no dependencies at all, either.

### /r/emacs

#### Using and navigating frames

Hi, Sorry if this is a repeat or a stupid question.

Currently I'm going back and forth between two frame setups quite often. They're in the form:

1.

+------------------+
|                  |
|                  |
|                  |
|                  |
|      LaTeX       |
|                  |
|                  |
|                  |
|                  |
+------------------+

2.

+------------------+
|                  |
|                  |
|   Python code    |
|                  |
|                  |
+------------------+
|                  |
|   Python shell   |
|                  |
+------------------+

I've been doing it manually now but it is a bit tedious to go past all of the message and log buffers, then split or unsplit a frame. Is there a way to have organized frames inside emacs? I'd prefer that to having two emacs instances running since Alt-tabbing will make me cycle almost as much as using C-x >.

submitted by elkano1003