Planet Primates

April 27, 2015

Fefe

Bug of the day: Is anyone here running Wordpress? Really? ...

Bug of the day: Is anyone here running Wordpress? Really? Still? How many more times does this have to blow up?

April 27, 2015 10:01 PM

Old and busted: bad bank. New hotness: bad energy utility. Uniper ...

Old and busted: bad bank.

New hotness: bad energy utility.

Uniper is to be the future name of the part of E.ON in which the energy group's conventional power generation is consolidated. The green remainder of E.ON is to relocate, back to the Ruhr area.

April 27, 2015 10:01 PM

CompsciOverflow

Bidding algorithm - processor allocation - distributed OS

What does the "price" in a bidding algorithm for processor allocation in a distributed OS mean?

by Wesam Adel at April 27, 2015 09:54 PM

StackOverflow

Overriding methods map and flatMap in a class extending trait Iterator

As a Scala beginner I'm trying to implement two counters: one incremented for every item of an Iterator being retrieved and processed in a for expression, and one incremented every time a new iteration over one of the "loops" of the expression (the outer loop and the nested loops) is started. The requirement is to accomplish this without simply placing a statement like counter = counter + 1 at numerous locations in the for expression. The following listing shows my proposed solution to this problem, and I would like to know why the method next, implementing Iterator's abstract member, gets called (and the corresponding counter is incremented), whereas flatMap and map, which override their counterparts defined in trait Iterator (and call them via super), are not called at all.

import scala.collection.GenTraversableOnce

object ZebraPuzzle {
  var starts = 0
  var items = 0

  class InstrumentedIter[A](it: Iterator[A]) extends Iterator[A] {
    private val iterator = it

    def hasNext = it.hasNext

    def next() = {
      items = items + 1
      it.next()
    }

    override def flatMap[B](f: (A) ⇒ GenTraversableOnce[B]): Iterator[B] = {
      starts = starts + 1
      super.flatMap(f)
    }

    override def map[B](f: (A) ⇒ B): Iterator[B] = {
      starts = starts + 1
      super.map(f)
    }
  } // inner class InstrumentedIter 

The corresponding for expression looks like this:

  def solve = {
    val first = 1
    val middle = 3
    val houses = List(first, 2, middle, 4, 5)
    for {
      List(r, g, i, y, b) <-  new InstrumentedIter(houses.permutations)
      if ...
      List(eng, span, ukr, jap, nor) <- new InstrumentedIter(houses.permutations)
      if ...
      if ...
      if ...
      List(of, tea, milk, oj, wat) <- new InstrumentedIter(houses.permutations)
      if ...
      ...
    } yield ...
  ...
  }
...
} // standalone singleton object ZebraPuzzle

I would be grateful if someone could give me a hint on how to solve the given problem in a better way. But most of all I am interested to know why my solution overriding Iterator's map and flatMap doesn't work as expected by my somewhat limited brain ;-)
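One possible reading of why the overrides are never hit, sketched as what the compiler roughly generates (my illustration, worth verifying with scalac -Xprint:parser):

val iter = new InstrumentedIter(List(List(1, 2), List(3, 4)).iterator)

// A for expression with a pattern and a guard, such as
//   for { List(a, b) <- iter if a > 0 } yield a + b
// is rewritten into approximately this chain:
val desugared =
  iter
    .withFilter {
      case List(_, _) => true
      case _          => false
    }                                        // filter inserted for the List(...) pattern
    .withFilter { case List(a, _) => a > 0 } // the guard
    .map { case List(a, b) => a + b }        // runs on a plain Iterator

// InstrumentedIter does not override withFilter, so the very first call already
// returns a plain Iterator; the subsequent flatMap/map therefore operate on that
// plain Iterator and never reach the overridden methods.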

Regards

Martin

by mkrimpel at April 27, 2015 09:50 PM

TheoryOverflow

Problems solvable by a Turing machine that is guaranteed to halt

Consider a Turing machine that is guaranteed to halt in time $O(f(n))$ for a known subset $S$ of input states, where $f$ is some computable function. What is the language that can be recognized by this Turing machine, for some given function $f$, assuming the input states given are only those that are members of $S$?

by Danielle Ensign at April 27, 2015 09:48 PM

Complexity of algebraic problems

I am looking for a list of the complexity of various numerical/algebraic problems. For example:

Adleman once published a list focused on $P$ and $NP$ but it seems outdated.

Mumford (http://www.dam.brown.edu/people/mumford/alg_geom/papers/1993a-WhatComputeAG-Bayer-DAM.pdf) has a paper on what is computable in Algebraic Geometry without regard to complexity.

Does anyone know of a list of (major) discoveries since that list was published?

What are some problems of number-theoretic/algebraic flavor whose complexity class is possibly already known (since that list was published), unknown but speculated about, or unknown and not even open to speculation?

by Turbo at April 27, 2015 09:46 PM

CompsciOverflow

Algorithm to determine whether a graph is a Strongly Regular Graph (SRG)

I am looking for an algorithm/formula/theorem to determine whether a graph is a Strongly Regular Graph (SRG) or not.

I have an idea:

Let $G = (V,E)$ be a regular graph with $v$ vertices and degree $k$. $G$ is said to be strongly regular if there are also integers $\lambda$ and $\mu$ such that:

Every two adjacent vertices have $\lambda$ common neighbours.

Every two non-adjacent vertices have $\mu$ common neighbours.

A graph of this kind is said to be an $SRG(v, k, \lambda, \mu)$. Then:

  1. $v$ and $k$ can be easily determined (by counting each row/column of the adjacency matrix), then use $(v-k-1)\mu=k(k-1-\lambda)$ to get an equation in $\lambda, \mu$.
  2. for each vertex, the above equation must hold iff the graph is a Strongly Regular Graph (SRG).

Is this correct? Are there algorithms for this question? Any reference/info would help, thanks.
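For completeness, a brute-force check along those lines (a sketch of my own, not a known library routine): count common neighbours for every pair of vertices and verify the counts are constant on adjacent and on non-adjacent pairs.

def isSRG(adj: Vector[Set[Int]]): Option[(Int, Int, Int, Int)] = {
  val v = adj.size
  val degrees = adj.map(_.size).distinct
  if (degrees.size != 1) return None // not even regular
  val k = degrees.head
  var lam: Option[Int] = None // common neighbours of adjacent pairs
  var mu: Option[Int] = None  // common neighbours of non-adjacent pairs
  for (i <- 0 until v; j <- i + 1 until v) {
    val common = (adj(i) intersect adj(j)).size
    if (adj(i)(j)) { // adjacent pair
      if (lam.exists(_ != common)) return None
      lam = Some(common)
    } else {         // non-adjacent pair
      if (mu.exists(_ != common)) return None
      mu = Some(common)
    }
  }
  Some((v, k, lam.getOrElse(0), mu.getOrElse(0)))
}

This runs over all $\binom{v}{2}$ pairs; note that the identity $(v-k-1)\mu=k(k-1-\lambda)$ holds for every graph the function accepts, so on its own that equation is a necessary but not sufficient test.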

by Jim at April 27, 2015 09:44 PM

CompsciOverflow

Complexity of computing optimal pure follower-strategy in response to leader's mixed strategy

I'm trying to see whether the following problem is NP-hard and, if so, how I can reduce an NP-complete problem to it.

There is a leader who is defending a set of targets. His pure strategies are subsets of these targets with size at most $k$. His mixed-strategy is a probability distribution over these pure strategies.

There is a follower who wants to attack these targets. His pure strategies are subsets of these targets as well.

The leader catches the follower and receives a payoff of 0 if the follower attacks any target that is covered by the leader. The follower receives a certain payoff if he is able to successfully attack a set of targets (i.e., none of these targets are covered by the leader).

The follower knows the pure-strategies of the leader, as well as the mixed-strategy. However, he does not know what particular pure-strategy the leader will play. His aim is to find the optimal pure-strategy (set of targets) to play that will guarantee him the most payoff.

This is solvable with a simple linear program. But I'm trying to see if it is NP-hard and what NP-complete problem I can reduce to it. The strategy space for the follower grows exponentially, since he has $\sum\limits_i^k\binom{n}{i}$ pure strategies available to him. He has to figure out which one of these gives him the best payoff.

For a particular pure-strategy, the follower's expected payoff is the sum of the product of the payoff for playing that particular strategy against a pure leader-strategy and the probability that the leader plays that pure strategy. So it is basically $\sum\limits_{D_i \in \mathcal{D}}p(A, D_i)d_i$, where $D_i$ is a pure leader-strategy, $A$ is a particular pure strategy, and $d_i$ is the probability that the leader plays the $D_i$ strategy. He wants to find the highest expected payoff, and so essentially the maximum out of a total of $|\mathcal{A}| \times |\mathcal{D}|$ options.
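To make the brute-force baseline concrete, here is an illustrative sketch (the names and representation are mine, not part of the model): enumerate every follower pure strategy of size at most $k$ and score it against the leader's mixed strategy.

def bestResponse(targets: Set[Int], k: Int,
                 leaderMix: Map[Set[Int], Double], // pure leader-strategy -> probability
                 payoff: Set[Int] => Double): (Set[Int], Double) =
  (1 to k).flatMap(targets.subsets(_)).map { a =>
    // the follower is caught (payoff 0) whenever the attacked set intersects
    // the leader's covered set; otherwise he earns payoff(a)
    val expected = leaderMix.map { case (d, p) =>
      if ((a intersect d).nonEmpty) 0.0 else p * payoff(a)
    }.sum
    (a, expected)
  }.maxBy(_._2)

The enumeration is exactly the $\sum\limits_i^k\binom{n}{i}$ blow-up described above, which is why a hardness reduction is plausible despite the simple per-strategy scoring.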

I've seen a particular example being reduced to 3-SAT. That particular example had a follower choosing a path through a graph, and the leader could block him by placing resources on edges. In my example I just have a set of targets that the attacker can attack.

by Vivin Paliath at April 27, 2015 09:39 PM

StackOverflow

What is the equivalent of Ansible's delegate_to in Puppet

When using the Ansible provisioner, the delegate_to: ip_address can be used to execute actions on the machine that originally invoked ansible (the host) instead of the guest.

When using Puppet, what would be a similar equivalent?

by amateur barista at April 27, 2015 09:39 PM

Scala IDE and Apache Spark -- different scala library version found in the build path


I have some main object:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object Main {

  def main(args: Array[String]) {
        val sc = new SparkContext(
              new SparkConf().setMaster("local").setAppName("FakeProjectName")
        )
  }
}


...then I add spark-assembly-1.3.0-hadoop2.4.0.jar to the build path in Eclipse from

Project > Properties... > Java Build Path :

[screenshot: Java Build Path error]

...and this warning appears in the Eclipse console:

More than one scala library found in the build path
(C:/Program Files/Eclipse/Indigo 3.7.2/configuration/org.eclipse.osgi/bundles/246/1/.cp/lib/scala-library.jar,
C:/spark/lib/spark-assembly-1.3.0-hadoop2.4.0.jar).
This is not an optimal configuration, try to limit to one Scala library in the build path.
FakeProjectName Unknown Scala Classpath Problem


Then I remove Scala Library [2.10.2] from the build path, and it still works. Except now this warning appears in the Eclipse console:

The version of scala library found in the build path is different from the one provided by scala IDE:
2.10.4. Expected: 2.10.2. Make sure you know what you are doing.
FakeProjectName Unknown Scala Classpath Problem


Is this a non-issue? Either way, how do I fix it?

by Ian Campbell at April 27, 2015 09:38 PM

QuantOverflow

Effective & Maturity Date Modified Following

I am constructing discount curve for tenor 1 month.

The first instrument, PLN_1M_WIBOR, has its Effective Date on 2015-01-29 (spot). I was wondering what the Maturity Date should be: 2015-02-27 or 2015-03-02? I am using the modified following convention. According to this convention I suppose it should be 2015-02-27, but I am not sure.

The second instrument is FRA_0102, dated today (2015-01-27), so its Effective Date should be 2015-02-27?
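For reference, a sketch of the modified following rule itself (my illustration, weekend-only calendar via java.time; a real curve also needs the holiday calendar):

import java.time.{DayOfWeek, LocalDate}

def isBusinessDay(d: LocalDate): Boolean =
  d.getDayOfWeek != DayOfWeek.SATURDAY && d.getDayOfWeek != DayOfWeek.SUNDAY

// Modified following: roll forward to the next business day, unless that
// leaves the month, in which case roll backward instead.
def modifiedFollowing(d: LocalDate): LocalDate = {
  var f = d
  while (!isBusinessDay(f)) f = f.plusDays(1)
  if (f.getMonth != d.getMonth) {
    f = d
    while (!isBusinessDay(f)) f = f.minusDays(1)
  }
  f
}

Under this rule, 2015-01-29 plus one month gives 2015-02-28, a Saturday; the following convention would move to Monday 2015-03-02, but that leaves February, so the date rolls back to Friday 2015-02-27, consistent with the guess above.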

by Mike M at April 27, 2015 09:38 PM

TheoryOverflow

Dimensionality reduction in machine learning

This is less of a question and more of a "here's my take let me know if you agree" (so I guess it might turn into a big-list?).

Dimensionality reduction refers to a collection of techniques that input data and return a lower-dimensional version, with some distortion. PCA and Johnson-Lindenstrauss are the most common examples.

From an algorithmic perspective, the tradeoff is clear: lower dimensionality yields faster runtimes and reduced storage space, but compromises precision. I call this the dimension-distortion tradeoff.
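For concreteness, the standard Johnson-Lindenstrauss guarantee makes the dimension-distortion tradeoff quantitative: for any $0 < \varepsilon < 1$ and any $n$ points in $\mathbb{R}^d$, there is a linear map $f$ into $\mathbb{R}^k$ with $k = O(\varepsilon^{-2}\log n)$ such that

$$(1-\varepsilon)\,\|u - v\|^2 \le \|f(u) - f(v)\|^2 \le (1+\varepsilon)\,\|u - v\|^2$$

for all pairs $u, v$ in the point set.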

From a statistical/information-theoretic perspective, the situation seems less clear. It is commonly believed that dimensionality reduction (PCA in particular) has a denoising effect and thus should actually improve the performance. On the other hand, dimensionality reduction does discard information, which might cause the performance to degrade. Thus, one must address the statistical question: is the information I'm discarding noise or potentially useful (and even if useful, might the benefits of lower dimension still outweigh the losses)?

There appear to be very few formal analyses of the statistical benefits of dimensionality reduction. One that I'm aware of is in our ALT'13 paper (with Gottlieb and Krauthgamer). The setting there is fairly general -- metric spaces.

Are there other formal analyses of the statistical benefits of dimensionality reduction? Perhaps other tradeoffs besides those mentioned above?

by Aryeh at April 27, 2015 09:36 PM

StackOverflow

scala nested for comprehension with futures

My case domain classes are as below

case class Account(randomId: String, accounts: List[String]) // for each of the accounts I need to get AccountProfiles

case class AccountProfiles(actId: String, profiles: List[String], additionalInfo: Map[String, String], ......)

case class AccountInfo(id: String, profiles: List[String]) // for each AccountProfiles I need to construct an AccountInfo

My access-layer method signatures to extract the above domain classes look like below:

getLinked(): Future[Account]
getAccountProfile(actId: String): Future[AccountProfiles]

Can I use a for comprehension to construct a Future of a list of AccountInfo domain objects with the help of the getLinked and getAccountProfile methods?
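For what it's worth, a sketch under the question's signatures (the yield mapping is my guess at the intended construction): a for comprehension alone cannot fan out over the List inside the Future, but Future.traverse can do the fan-out step.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def accountInfos(): Future[List[AccountInfo]] =
  for {
    account  <- getLinked()                                          // Future[Account]
    profiles <- Future.traverse(account.accounts)(getAccountProfile) // Future[List[AccountProfiles]]
  } yield profiles.map(p => AccountInfo(p.actId, p.profiles))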

by Laxmikanth Samudrala at April 27, 2015 09:34 PM

How can I set file creation times in ZFS?

I've just got a NAS running ZFS and I'd like to preserve creation times when transferring files into it. Both linux/ext4 (where the data is now) and ZFS store a creation time or birth time. In the case of ZFS it's even reported by the stat command. But I haven't been able to figure out how I can set the creation time of a file so that it mirrors the creation time in the original file system, unlike an ext4->ext4 transfer, where I can feed debugfs a script to set the file creation times.

Is there a tool similar to debugfs for ZFS?

by Silvio Levy at April 27, 2015 09:34 PM

CompsciOverflow

Why doesn't parallelism necessarily imply non-determinism?

I'm a student reading a book on threads, and when I got to non-deterministic and parallel programs, I got a bit confused. I hope you can help me out.

I understand the difference between concurrency and parallelism. I get that concurrent programs are non-deterministic depending on the precise timing of events. "But parallelism doesn't necessarily imply non-determinism" - as said in the book. Why is that?

Does that imply that it all depends on the languages that support parallelism, meaning that these languages should execute parallel programs in a deterministic manner?
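A minimal illustration of the distinction (my example, not from the book), using Scala's parallel collections:

val xs = (1 to 1000).toVector

// Deterministic parallelism: each element's result is independent of
// scheduling, so every run produces the same vector.
val squares = xs.par.map(x => x * x)

// The non-determinism comes from racing on shared state, not from the
// parallelism itself: this unsynchronized counter can differ between runs.
var counter = 0
xs.par.foreach(_ => counter += 1)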

Another question that I have: what exactly does the timing of events in concurrent programs depend on? The architecture of the machine?

by macmania314 at April 27, 2015 09:32 PM

Distributed vs parallel computing

I often hear people talking about parallel computing and distributed computing, but I'm under the impression that there is no clear boundary between the two, and people tend to confuse them pretty easily, while I believe they are very different:

  • Parallel computing is more tightly coupled to multi-threading, or how to make full use of a single CPU.
  • Distributed computing refers to the notion of divide and conquer, executing sub-tasks on different machines and then merging the results.

However, since we stepped into the Big Data era, it seems the distinction is indeed blurring, and most systems today use a combination of parallel and distributed computing.

An example I use in my day-to-day job is Hadoop with the Map/Reduce paradigm, a clearly distributed system with workers executing tasks on different machines, but also taking full advantage of each machine with some parallel computing.

I would like to get some advice to understand how exactly to make the distinction in today's world, and whether we can still talk about parallel computing, or whether there is no longer a clear distinction. To me it seems distributed computing has grown a lot over the past years, while parallel computing seems to stagnate, which could probably explain why I hear much more talk about distributing computations than about parallelizing them.

by Charles Menguy at April 27, 2015 09:31 PM

StackOverflow

Scala Option type upper bound don't understand

I'm reading Functional Programming in Scala, and in chapter 04 the authors implement Option on their own. Now, when defining the function getOrElse, they use an upper bound to restrict the type of A to a supertype (if I understood correctly).

So, the definition goes:

sealed trait Option[+A] {
   def getOrElse[B >: A](default: => B): B = this match {
     case None => default
     case Some(a) => a
   }
}

So, when we have something like

val a = Some(4)
println(a.getOrElse(None)) // println prints an integer value
val b = None
println(b.getOrElse(Some(3))) // println prints an Option[Integer] value

a has type Option[Int], so A would be type Int. B would be type Nothing. Nothing is a subtype of every other type. That means that Option[Nothing] is a subtype of Option[Int] (because of covariance), right?

But with B >: A we said that B has to be a supertype?! So how can we get an Int back? This is a bit confusing for me...

Anyone care to try and clarify?
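One way to see what the compiler does here, sketched for a REPL check (my illustration): the lower bound forces B to be a supertype of Int that can also hold None, and the nearest such type is Any.

val a: Option[Int] = Some(4)
val r = a.getOrElse(None) // r: Any -- B is inferred as Any, the least common
                          // supertype of Int and None.type; you still get the
                          // Int 4 back, just statically typed as Any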

by Marin at April 27, 2015 09:20 PM

StackOverflow

proper use of scala traits and types

New to Scala and trying to get the hang of the class system. Here's a simple setup:

sealed trait Shape{
  def sides:Int
}

final case class Square() extends Shape {
  def sides() = 4
}

final case class Triangle() extends Shape {
  def sides() = 3
}

Now, I want to create a function that takes anything of type Shape, which we know will have a sides() method implemented, and makes use of that method.

def someFunction(a: Shape)={
    val aShape = a()
    aShape.sides()
}

But this hits an error at val aShape = a(), since a is a value of type Shape, not something that can be applied.

I realize that in this example, it's excessive to create someFunction, since sides() can be accessed directly from the objects. But my primary question is in the context of someFunction - I'd like to pass a class to a function, and instantiate an object of that class and then do something with that object. Thanks for your help.
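For what it's worth, a sketch of two common fixes (reducing someFunction to its essence; the factory variant is my own illustration):

// a value of type Shape already is a shape; no application needed:
def someFunction(a: Shape): Int = a.sides
someFunction(Square())   // 4

// to instantiate inside the function, accept a factory rather than an instance:
def fromFactory(make: () => Shape): Int = make().sides
fromFactory(() => Triangle())   // 3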

by keegan at April 27, 2015 09:18 PM

Proper use of Scala traits and case objects

Trying to get the hang of Scala classes and traits, here's a simple example. I want to define a class which specifies a variety of operations that could be implemented in lots of ways. I might start with:

sealed trait Operations{
  def add
  def multiply
}

So for example, I might instantiate this class with an object that does add and multiply very sensibly,

case object CorrectOperations extends Operations{
    def add(a:Double,b:Double)= a+b
    def multiply(a:Double,b:Double)= a*b
}

And also, there could be other ways of defining those operations, such as this obviously wrong way,

case object SillyOperations extends Operations{
    def add(a:Double,b:Double)= a + b - 10
    def multiply(a:Double,b:Double)= a/b
}

I would like to pass such an instance into a function that will execute the operations in a particular way.

 def doOperations(a:Double,b:Double, op:operations) = {
   op.multiply(a,b) - op.add(a,b)
 }

I would like doOperations to take any object of type Operations so that I can make use of add and multiply, whatever they may be.

What do I need to change about doOperations, and what am I misunderstanding here? Thanks
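A sketch of the likely fix (mine, inferred from the snippets above): give the abstract methods full signatures, and match the trait's capitalization at the use site.

sealed trait Operations {
  def add(a: Double, b: Double): Double
  def multiply(a: Double, b: Double): Double
}

def doOperations(a: Double, b: Double, op: Operations): Double =
  op.multiply(a, b) - op.add(a, b)

Without the parameter lists in the trait, add and multiply are abstract no-argument methods returning Unit, so the definitions in the case objects are unrelated overloads rather than implementations.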

by keegan at April 27, 2015 09:15 PM

QuantOverflow

Why the expected return rate of a stock has nothing to do with its option price?

OK, I admit that this is a frequently asked question. But I couldn't find a satisfying answer after I read the explanations in books, went through the derivations of the B-S formula, and searched for answers online. My question is: I can understand the derivation of the B-S formula, but what is the intuition for why the expected return rate of a stock has nothing to do with its option price?

Suppose I have two stocks A and B, whose price is the same today, both worth 20 dollars. Stock A has an expected return of 0.5 dollars/week and a volatility of 50%; stock B has an expected return of 10 dollars/week and a volatility of 1%. For call options with a strike price of 40 dollars expiring in 1 month (4 weeks), how can the option price for stock A be greater than that for B, since stock A is expected to be worth only 22 dollars while B is expected to be worth 60 dollars? Any intuitive explanations?
I can also make examples where stock A has a vanishingly small expected return and B has an infinitely large expected return.
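For reference, the standard resolution (textbook risk-neutral pricing, not my own argument): the option payoff can be replicated by dynamically trading the stock and the riskless bond, and the replication cost does not involve the drift. Equivalently, the price is a discounted expectation under the risk-neutral measure $\mathbb{Q}$, under which every stock drifts at the risk-free rate $r$:

$$C = e^{-rT}\,\mathbb{E}^{\mathbb{Q}}\!\left[(S_T - K)^+\right], \qquad dS_t = r\,S_t\,dt + \sigma\,S_t\,dW_t^{\mathbb{Q}}.$$

The expected return $\mu$ cancels in the hedged position, so only $\sigma$ survives: in the example, stock B's 1% volatility, not its large drift, is what makes its 40-strike call nearly worthless.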

by Allanqunzi at April 27, 2015 09:04 PM

StackOverflow

Wiki xml parser - org.apache.spark.SparkException: Task not serializable

I am a newbie to both Scala and Spark and am trying some of the tutorials; this one is from Advanced Analytics with Spark. The following code is supposed to work:

import com.cloudera.datascience.common.XmlInputFormat
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io._
val path = "/home/petr/Downloads/wiki/wiki"
val conf = new Configuration()
conf.set(XmlInputFormat.START_TAG_KEY, "<page>")
conf.set(XmlInputFormat.END_TAG_KEY, "</page>")
val kvs = sc.newAPIHadoopFile(path, classOf[XmlInputFormat],
classOf[LongWritable], classOf[Text], conf)

val rawXmls = kvs.map(p => p._2.toString)

import edu.umd.cloud9.collection.wikipedia.language._
import edu.umd.cloud9.collection.wikipedia._

def wikiXmlToPlainText(xml: String): Option[(String, String)] = {
val page = new EnglishWikipediaPage()
WikipediaPage.readPage(page, xml)
if (page.isEmpty) None
else Some((page.getTitle, page.getContent))
}

val plainText = rawXmls.flatMap(wikiXmlToPlainText)

But it gives

scala> val plainText = rawXmls.flatMap(wikiXmlToPlainText)
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1622)
at org.apache.spark.rdd.RDD.flatMap(RDD.scala:295)
...

Running Spark v1.3.0 locally (and I have loaded only about 21 MB of the wiki articles, just to test it).

All of http://stackoverflow.com/search?q=org.apache.spark.SparkException%3A+Task+not+serializable didn't get me any clue...
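A common workaround sketch (this is the usual suspect for this exception, though not guaranteed to be the cause here): the closure passed to flatMap must be serializable, and a method defined at the top level of the shell drags its enclosing object into the closure. Moving the function into a standalone serializable object avoids capturing the shell:

object WikiParser extends Serializable {
  import edu.umd.cloud9.collection.wikipedia._
  import edu.umd.cloud9.collection.wikipedia.language._

  def wikiXmlToPlainText(xml: String): Option[(String, String)] = {
    val page = new EnglishWikipediaPage()
    WikipediaPage.readPage(page, xml)
    if (page.isEmpty) None
    else Some((page.getTitle, page.getContent))
  }
}

val plainText = rawXmls.flatMap(WikiParser.wikiXmlToPlainText)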

Thanks.

by Bechyňák Petr at April 27, 2015 08:57 PM

QuantOverflow

(Re) normalisation of random variable in Monte-Carlo simulations

I have a very simple model (CIR) with a very simple discretisation scheme (Euler) and I use it to do Monte-Carlo Simulations. It is working.

Someone insisted that renormalization of my random variables would give better results, i.e. after drawing my normally distributed random variables I should translate them to obtain exactly mean 0 and scale them to obtain exactly variance 1. I have never heard of this simple technique before.

From a theoretical point of view, I am unsure that my new random variables have the normal distribution I wanted. From a practical point of view, the change in the result is negligible.

Does this technique really work? Do you have an example where it is useful? Or a counterexample where it is very wrong?
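For concreteness, the technique being described is usually called moment matching; a minimal sketch (mine):

val rng = new scala.util.Random(42)
val z = Array.fill(100000)(rng.nextGaussian())

// shift to empirical mean 0, then scale to empirical variance 1
val mean = z.sum / z.length
val variance = z.map(x => (x - mean) * (x - mean)).sum / z.length
val zAdj = z.map(x => (x - mean) / math.sqrt(variance))

// Caveat (the theoretical worry raised above): the adjusted draws are no
// longer independent, since each one now depends on the whole sample.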

by Were_cat at April 27, 2015 08:54 PM

Lobsters

What are you working on this week?

This is the weekly thread to discuss what you’ve done recently and are working on this week.

Please be descriptive and don’t hesitate to ask for help, advice or other guidance.

by caius at April 27, 2015 08:49 PM

StackOverflow

Convert a function from Scala to Python [on hold]

I have the following function written in Scala. I am a newbie with Python and Spark.

import org.apache.spark.mllib.linalg._

def toRDD(m: Matrix): RDD[Vector] = {
  val columns = m.toArray.grouped(m.numRows)
  val rows = columns.toSeq.transpose // Skip this if you want a column-major RDD.
  val vectors = rows.map(row => new DenseVector(row.toArray))
  sc.parallelize(vectors)
}

How do I convert it into Python?

by canada canada at April 27, 2015 08:45 PM

How to roll up into Any and then unroll into the generic type in scala

As you roll up something like instances of Flag[T], you end up rolling into Flag[_] or Flag[Any] like so

val seq: Seq[Flag[_]] = new Flag[Int](5) :+ new Flag[MyFlag](someinstance) :+ new Flag[String]("stringValue")

so inside Flag, I added a

def genericType(implicit m: Manifest[T]): Class[_] = m.runtimeClass

We also have inside the Flag class a

def value: T = { //the value }}

so now as I loop over Flag[_], I need to pass value and classOf[T] into a template function

def someTemplateFunction[F](clazz: Class[F], value: F)

How can I achieve this? and should I be using [_], [Any] or [Nothing]?

I finally got something that compiles but had to use a Java function like so to get there (about to test it all out, as I am not sure it will work at all):

public class BindGuiceByType {
    public static void bind(Binder binder, String flagName, Object value, Class clazz) {
        Key key = Key.get(clazz, new FlagImpl(flagName));
        binder.bind(key).toInstance(value);
    }
}
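For what it's worth, a sketch of one way out in plain Scala (my illustration, restructuring Flag rather than using the question's exact definitions): capture the runtime class when the Flag is constructed, then re-capture the existential per element.

class Flag[T](val value: T)(implicit m: Manifest[T]) {
  val genericType: Class[T] = m.runtimeClass.asInstanceOf[Class[T]]
}

def someTemplateFunction[F](clazz: Class[F], value: F): Unit =
  println(s"${clazz.getName} -> $value")

// the wildcard in Flag[_] is re-captured as F on each call:
def applyFlag[F](flag: Flag[F]): Unit =
  someTemplateFunction(flag.genericType, flag.value)

val seq: Seq[Flag[_]] = Seq(new Flag(5), new Flag("stringValue"))
seq.foreach(applyFlag(_))

With this shape, Seq[Flag[_]] works and neither Any nor Nothing needs to appear at the call site.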

by Dean Hiller at April 27, 2015 08:43 PM

QuantOverflow

Forward Curves and Par Yield Curves

I've recently been reading a research paper on the yield curve by Salomon Brothers, and it states that when the forward curve is above the par yield curve, it is seen as cheaper. If, for example, years 9-12 of the forward rate curve lie above the par yield curve, with the forward 12-year rate above the 9-year rate as well, it is recommended to buy the 12-year bond while selling the 9-year bond.

Unfortunately, I am unable to accurately grasp the concept behind this in relation to the par yield curve. Please help! Thank you!

by Timothy Ng at April 27, 2015 08:38 PM

StackOverflow

Scala case class private constructor but public apply method

If I have the following case class with a private constructor, I cannot access the apply method in the companion object.

case class Meter private (m: Int)

val m = Meter(10) // constructor Meter in class Meter cannot be accessed...

Is there a way to use a case class with a private constructor but keep the generated apply-method in the companion public?

I am aware that there is no difference (in my example) between the two options:

val m1 = new Meter(10)
val m2 = Meter(10)

but I want to forbid the first option.

-- edit --

Surprisingly the following works (but is not really what I want):

val x = Meter
val m3 = x(10) // m3  : Meter = Meter(10)

by Erik at April 27, 2015 08:37 PM

Multiple Scala libraries causing error in IntelliJ?

I am using IntelliJ 14 with Scala 2.11.6, installed using Homebrew and symlinked using

ln -s /usr/local/Cellar/scala/2.11.6/libexec/src /usr/local/Cellar/scala/2.11.6/src
ln -s /usr/local/Cellar/scala/2.11.6/libexec/lib  /usr/local/Cellar/scala/2.11.6/lib
mkdir -p /usr/local/Cellar/scala/2.11.6/doc/scala-devel-docs
ln -s /usr/local/Cellar/scala/2.11.6/share/doc/scala /usr/local/Cellar/scala/2.11.6/doc/scala-devel-docs/api

I tried running a simple hello world but ran into the following issue.

Error:scalac: Multiple 'scala-library*.jar' files (scala-library.jar, scala-library.jar, scala-library.jar) in Scala compiler classpath in Scala SDK scala-sdk-2.11.6

Edit:

So I checked the compiler classpath under global libraries, and apparently there are multiple scala-library.jar files:

file:///usr/local/Cellar/scala/2.11.6/idea/lib/scala-library.jar
file:///usr/local/Cellar/scala/2.11.6/lib/scala-library.jar
file:///usr/local/Cellar/scala/2.11.6/libexec/lib/scala-library.jar

Does anyone know why?

by Kevin at April 27, 2015 08:36 PM

Extracting the complete call graph of a scala project (tough one)

I would like to extract, from a given Scala project, the call graph of all methods which are part of the project's own source.

As I understand it, the presentation compiler doesn't enable that, and it requires going all the way down to the actual compiler (or a compiler plugin?).

Can you suggest complete code that would safely work for most Scala projects, except those that use the wackiest dynamic language features? By call graph, I mean a directed (possibly cyclic) graph comprising class/trait + method vertices, where an edge A -> B indicates that A may call B.

Calls to/from libraries should be avoided or "marked" as being outside the project's own source.

Thanks!

by matt at April 27, 2015 08:34 PM

TheoryOverflow

Do these set systems imply a partition?

During research, I hit a set-theoretic claim that I could neither prove nor disprove.

Let $S_1,S_2,S_3$ be three set systems over the same universe $U$ such that

  1. $S_1,S_2,S_3$ are closed w.r.t. $\cap$ and $\cup$ and

  2. for each pair $u,v \in U$, there are two disjoint sets $X \in S_i, Y \in S_j$, $i,j \in \{1,2,3\}$, $i \neq j$, such that $u \in X$ and $v \in Y$.

Then, there is a partition of $U$ into at most three sets, each from a different set system.

by Oliver Witt at April 27, 2015 08:11 PM

CompsciOverflow

What advantages does a block programming environment have over a high-level programming language?

I need help with my GCSE computing homework: I need to know what advantages block programming environments like App Inventor have over high-level programming languages like Python or Java.

by user31103 at April 27, 2015 08:10 PM

AWS

DynamoDB Update – Improved JSON Editing & Key Condition Expressions

Thousands of customers use Amazon DynamoDB to build popular applications for Gaming (Battle Camp), Mobile (The Simpsons Tapped Out), Ad-tech (AdRoll), Internet-of-Things (Earth Networks) and Modern Web applications (SmugMug).

We have made some improvements to DynamoDB in order to make it more powerful and easier to use. Here’s what’s new:

  1. You can now add, edit, and retrieve native JSON documents in the AWS Management Console.
  2. You can now use a friendly key condition expression to filter the data returned from a query by specifying a logical condition that must be met by the hash or hash range keys for the table.

Let’s take a closer look!

Native JSON Editing
As you may know, DynamoDB already has support for storage, display, and editing of JSON documents (see my previous post, Amazon DynamoDB Update – JSON, Expanded Free Tier, Flexible Scaling, Larger Items if this is news to you). You can store entire JSON-formatted documents (each up to 400 KB) as single DynamoDB items. This support is implemented within the AWS SDKs and lets you use DynamoDB as a full-fledged document store (a very common use case).

You already have the ability to add, edit, and display JSON documents in the console in DynamoDB’s internal format. Here’s what this looks like: [screenshot in the original post]

Today we are adding support for adding, editing, and displaying documents in native JSON format. Here’s what the data from the example above looks like in this format: [screenshot in the original post]

You can work with the data in DynamoDB format by clicking on DynamoDB JSON. You can enter (or paste) JSON directly when you are creating a new item: [screenshot in the original post]

You can also view and edit the same information in structured form: [screenshot in the original post]

Key Condition Expressions
You already have the ability to specify a key condition when you call DynamoDB’s Query function. If you do not specify a condition, all of the items that match the given hash key will be returned. If you specify a condition, only items that meet the criteria that it specifies will be returned. For example, you could choose to retrieve all customers in Zip Code 98074 (the hash key) that have a last name that begins with “Ba.”

With today’s release, we are adding support for a new and easier to use expression-style syntax for the key conditions. You can now use the following expression to specify the query that I described in the preceding paragraph:

      zip_code = "98074" and begins_with(last_name, "Ba")

The expression can include comparison operators (=, <, <=, >, >=), range tests (BETWEEN/AND), and prefix tests (begins_with). You can specify a key condition (the KeyCondition parameter) or a key condition expression (the KeyConditionExpression parameter) on a given call to the Query function, but you cannot specify both. We recommend the use of expressions for new applications. To learn more, read about Key Condition Expressions in the DynamoDB API Reference.
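A sketch of the new parameter from code (my illustration, calling the Java SDK v1 from Scala; the table name is made up, and note that the literal values from the prose become expression attribute values in the actual API):

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient
import com.amazonaws.services.dynamodbv2.model.{AttributeValue, QueryRequest}
import scala.collection.JavaConverters._

val client = new AmazonDynamoDBClient()
val request = new QueryRequest()
  .withTableName("Customers")
  .withKeyConditionExpression("zip_code = :zip and begins_with(last_name, :prefix)")
  .withExpressionAttributeValues(Map(
    ":zip"    -> new AttributeValue().withS("98074"),
    ":prefix" -> new AttributeValue().withS("Ba")
  ).asJava)
val result = client.query(request)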

Available Now
These features are available now and you can start using them today!

Jeff;

by Jeff Barr at April 27, 2015 08:10 PM

CompsciOverflow

Turing machine that semi-decides a language containing all strings of length 4

I was going through some Halting Problem reduction and I found the following question:

Given a TM $M$, does the language that it semi-decides contain all strings of exactly length $4$?

The question later asks to prove whether this is recursive or r.e. Going through some practice, I came to the conclusion that this must be r.e., since otherwise such a problem would solve the Halting Problem.

So, I attempted this question by showing that $$HP < 4LENGTH$$

My approach is to assume that $4LENGTH$ is recursive and then derive the contradiction: since $HP$ is not recursive, neither is $4LENGTH$.

However, I can't understand how to construct such a machine. Moreover, by looking at some similar proofs I see that they suggest building a machine $M'$ that erases the tape, writes $I$ (the input) on the tape and simulates $M$. However, I can't follow such a proof.

Can someone describe how to solve such a problem or give any direction?
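For what it's worth, here is my reading of the construction those proofs hint at (a sketch, not a verified solution): given an instance $(M, I)$ of $HP$, build $M'$ that on any input $w$ erases the tape, writes $I$, simulates $M$ on $I$, and accepts if the simulation halts. Then $L(M') = \Sigma^*$ if $M$ halts on $I$, and $L(M') = \emptyset$ otherwise, so $L(M')$ contains all strings of length $4$ if and only if $M$ halts on $I$; a decider for $4LENGTH$ would therefore decide $HP$.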

by revisingcomplexity at April 27, 2015 07:59 PM

StackOverflow

How can I get the behavior of GNU's readlink -f on a Mac?

On Linux, the readlink utility accepts an option -f that follows additional links. This doesn't seem to work on Mac and possibly BSD based systems. What would the equivalent be?

Here's some debug information:

$ which readlink; readlink -f
/usr/bin/readlink
readlink: illegal option -f
usage: readlink [-n] [file ...]

by troelskn at April 27, 2015 07:58 PM

QuantOverflow

The Law of One Price in a discrete model

The following question assumes familiarity with the discrete model described in chapter 5 of Steven Roman's "Introduction to the Mathematics of Finance", 2nd edition, Springer 2012. I will not describe the model or the associated notation in this post.

  1. The Law of One Price (p. 132) states that, in the absence of arbitrage opportunity in the market, $$ \mathcal{V}_T(\Phi_1) = \mathcal{V}_T(\Phi_2) \implies \mathcal{V}_k(\Phi_1) = \mathcal{V}_k(\Phi_2) $$ for all times $0 \leq k \leq T$ and for all self-financing trading strategies $\Phi_1$ and $\Phi_2$.

    Unfortunately, no proof is provided in the text (in fact, this law is stated as a definition rather than a theorem). Why does this law hold? (A sketch of the standard argument is given after this list.)

  2. Additionally, it is implied by the text following the statement of the Law of One Price, that, if the market has no arbitrage opportunity, then, given an attainable alternative $X$, if a new asset $a^*$ is introduced into the market and is priced in such a way that its payoff at time $t_T$ is $X$ and its pricing is consistent with the Law of One Price, i.e. for every $k \in \{0, 1, \dots, T\}$, $S_{a^*, k} := \mathcal{V}_k(\Phi)$, where $\Phi$ is any self-financing trading strategy such that $\mathcal{V}_T(\Phi) = X$, then the resulting, extended market will still have no arbitrage opportunity.

    Why is this so?
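On question 1, the standard no-arbitrage sketch (my addition, not Roman's text): suppose $\mathcal{V}_T(\Phi_1) = \mathcal{V}_T(\Phi_2)$ but $\mathcal{V}_k(\Phi_1) > \mathcal{V}_k(\Phi_2)$ at some time $k$. At time $k$, short $\Phi_1$, buy $\Phi_2$, and invest the positive difference $\mathcal{V}_k(\Phi_1) - \mathcal{V}_k(\Phi_2)$ in the riskless asset. At time $T$ the two portfolio values cancel, leaving the accrued difference as a riskless profit, i.e. an arbitrage. Question 2 follows the same pattern: any mispricing of $a^*$ relative to a replicating strategy $\Phi$ at some time $k$ would create exactly such a buy-cheap/sell-dear opportunity in the extended market.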

by Evan Aad at April 27, 2015 07:57 PM

StackOverflow

Why is compiling FreeBSD so complicated?

I want to add a custom syscall to FreeBSD (school work). I have googled hundreds of times; there is no right solution out there. My homework is: "Add a custom syscall to the FreeBSD kernel, recompile the kernel and use it". Finally I found that I should follow the instructions on these two pages:

1 : http://www.onlamp.com/pub/a/bsd/2003/10/09/adding_system_calls.html

then

2: https://www.freebsd.org/doc/en/books/handbook/kernelconfig-building.html

But it shows errors at compile time:

<sys/parma.h> no such file or directory
<sys/kern.h> no such file or directory
<sys/syscallargs.h> no such file or directory

I removed these three header includes from my file, then recompiled it. Now it shows other errors, like MAXCPU undeclared in the pcpu.h file.

What did I miss? How can I do my school work?

NOTE: I use FreeBSD 8 in VirtualBox

by Naser Hamidi at April 27, 2015 07:57 PM

Executing scala code from a jar, clarification needed

I have a Maven project which produces one jar file at the end. The project contains two modules, one Java and one Scala.

Scala module uses Java code for some backend things. In my example, Java is actual logic, Scala is business rules.

This is how I am executing my code. Neither way I am trying works.

java.lang.ClassNotFoundException:

scala -cp /usr/share/scala/lib/scalatest_2.11-2.2.4.jar org.scalatest.tools.Runner -p '/home/program-201504.22-SNAPSHOT.jar:/home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar:/home/me/temp/lib/.' -o -fWDF /home/me/git/program/scala/target/surefire-reports/TestSuite.txt -u /home/me/git/program/scala/target/surefire-reports/. -s a.b.engine.driver.MyClass

java.lang.ClassNotFoundException:

scala -cp /usr/share/scala/lib/scalatest_2.11-2.2.4.jar:/home/program-201504.22-SNAPSHOT.jar:/home/me/temp/lib/.:/home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar org.scalatest.tools.Runner -p '/usr/share/scala/lib/scalatest_2.11-2.2.4.jar:/home/program-201504.22-SNAPSHOT.jar:/home/me/temp/lib/.:/home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar' -o -fWDF /home/me/git/program/scala/target/surefire-reports/TestSuite.txt -u /home/me/git/program/scala/target/surefire-reports/. -s a.b.engine.driver.MyClass

java.lang.NoClassDefFoundError

scala -cp /usr/share/scala/lib/scalatest_2.11-2.2.4.jar org.scalatest.tools.Runner -R "/home/program-201504.22-SNAPSHOT.jar /home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar /home/me/temp/lib/." -o -fWDF /home/me/git/program/scala/target/surefire-reports/TestSuite.txt -u /home/me/git/program/scala/target/surefire-reports/. -s a.b.engine.driver.MyClass

Please help me understand what I am doing wrong.

  1. I confirmed that a.b.engine.driver.MyClass exists in /home/me/git/program/scala/target/scala-program-201504.22-SNAPSHOT.jar

by Rajesh Thanavarapu at April 27, 2015 07:54 PM

CompsciOverflow

How to combine items in a list into one long list? [on hold]

python 3.3

Hi.

list = ['apple', 'banana', 'carrot', 'dinosaur', 'elephant']

This is my list, and I want it to become one long list of single characters, like this:

list = ["a", "p", "p", "l", "e", "b", "a", "n"-----] etc.

How do I achieve this?

by user31099 at April 27, 2015 07:45 PM

StackOverflow

Intellij: "Error running Scala Console: Cannot Start Process"

New install.

Scala SBT project.

Full message is:

Error running Scala Console: Cannot start process, the working directory C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 13.1.4\jre\jre\bin does not exist

It would be nice to have the directory default to somewhere sensible in the current project...I think.

by ben rudgers at April 27, 2015 07:44 PM

Passing request context implicitly in an actor system

I would like to propagate a request context implicitly in a system of collaborating actors.

To simplify and present the situation, my system has multiple actors and the messages passed to these actors need to include this RequestContext object.

ActorA receives messages of type MessageA. ActorB receives messages of type MessageB.

When ActorA needs to send a message to ActorB as part of the handling of MessageA, it performs business logic, constructs a MessageB from the results of the logic as well as the RequestContext available in MessageA, and then sends it to ActorB:

def handle(ma:MessageA) {
 val intermediateResult = businessLogic(ma)
 actorB ! MessageB(intermediateResult, ma.requestContext)
}

We have a slew of messages to be handled, and explicitly passing around the requestContext is cumbersome.

I am trying to think of creative ways to use Scala's implicits feature to avoid explicitly injecting the RequestContext embedded within the incoming message into the outgoing message.

The messages are case classes (and they need to be). I have read about implicit resolution rules, but bringing an attribute of an object into the current implicit scope appears far-fetched.

This, I am sure, should be a common requirement. Any suggestions?
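One hedged sketch (the wiring is my illustration; only the names follow the question): make the context an implicit parameter of the outgoing message's factory, and bring the incoming message's context into implicit scope once per handler.

case class RequestContext(id: String)
case class MessageA(payload: String, requestContext: RequestContext)
case class MessageB(result: Int, requestContext: RequestContext)

object MessageB {
  def apply(result: Int)(implicit ctx: RequestContext): MessageB =
    MessageB(result, ctx)
}

def handle(ma: MessageA): Unit = {
  implicit val ctx: RequestContext = ma.requestContext // one line per handler
  val intermediateResult = 42
  // actorB ! MessageB(intermediateResult)  -- the context travels implicitly
}

The messages stay plain case classes; the cost is the single implicit val at the top of each handler.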

Thanks.

by vishr at April 27, 2015 07:33 PM

SBT with Maven Central - get rid of Scala version in artifact url

I am trying to use, in my Scala project, a Java library which is on Maven Central. While resolving this dependency, SBT appends the Scala version to the artifact URL, which obviously does not exist in such a format. Can I somehow disable appending the Scala version for this specific artifact?
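For reference (standard sbt behavior, with placeholder coordinates): the Scala suffix comes from the %% operator, while a single % leaves the artifact name untouched, which is what a plain Java library needs.

libraryDependencies += "org.example" %% "some-scala-lib" % "1.0" // resolves some-scala-lib_2.11
libraryDependencies += "org.example" %  "some-java-lib"  % "1.0" // resolves some-java-lib as-is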

by mkorszun at April 27, 2015 07:30 PM

How to count characters of a String?

I am pretty new to Scala! I would like to count the number of times each char occurs in a string. How do I do this? I started writing something like this, but I find the syntax very hard to grasp. Any help?

 var s = "hello"
 var list = s.toList.distinct
 list.foreach(println(s.count(_=='list')))
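A compact alternative sketch (mine): group the characters and take each group's size, which yields all counts in one pass.

val s = "hello"
val counts: Map[Char, Int] = s.groupBy(identity).mapValues(_.length)
// counts: Map(h -> 1, e -> 1, l -> 2, o -> 1)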

by lioli at April 27, 2015 07:29 PM

Creating Spark application using wrong Scala version

I am following the instructions here: https://spark.apache.org/docs/latest/quick-start.html to create a simple application that will run on a local standalone Spark build.

In my system I have Scala 2.9.2 and sbt 0.13.7. When I write in my simple.sbt the following:

scalaVersion := "2.9.2"

after I use sbt package, I get the error: sbt.ResolveException: unresolved dependency: org.apache.spark#spark-core_2.9.2;1.3.1: not found

However, when I write in simple.sbt:

scalaVersion := "2.10.4"

sbt runs successfully and the application runs fine on Spark.

How can this happen, since I do not have Scala 2.10.4 on my system?
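The mechanics, for reference (standard sbt behavior): sbt never uses the Scala installed on the machine; it downloads the compiler and library declared by scalaVersion, and %% then looks for artifacts cross-built for that version. Spark 1.3.1 was published for 2.10 (and 2.11) but not 2.9.x, so only those suffixes resolve:

scalaVersion := "2.10.4" // sbt fetches this Scala itself; the system install is ignored
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.1" // resolves spark-core_2.10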

by Cantfindname at April 27, 2015 07:23 PM

StackOverflow

Scala: Parsing JSON using asOpt[T] returns None although I have a value

I'm trying to extract a value using asOpt[T], as I may not have the key. I get None in return although I have a value in my string.

val jsonString = """{"data": {"operation":"+", "value":"1"},"right": {"data":{"operation":"-", "value":"2"}},"left":{"data":{"operation":"-", "value":"4"}}}"""
buildFromJson(jsonString)

Code:

import play.api.libs.json.Json
...
def buildFromJson(jsonString: String): TreeNode = {
    val json = Json.parse(jsonString) // parse the incoming string once
    val nodeData = json.\("data")
    val nodeValue = nodeData.\("operation").as[String] // This works

    println("left= " + json.\("left").asOpt[String]) // output= None
    println("left= " + json.\("left").as[String]) // throws an exception
    println("left= " + json.\("left").toString) // output= {"data":{"operation":"-", "value":"4"}}

What is the best way to parse a JSON key that might not exist? From what I read it is asOpt, but it doesn't work.

Update - How I overcome the problem:

1) First of all, m-z was right: it was indeed an object and could not be returned as a String.

2) I couldn't find a good way to extract a field that might not exist (at the beginning I added 'right' to the JSON only if the right sub-node existed). Now I add a 'right' field with a null value to the JSON if it doesn't exist.
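A sketch of the asOpt usage that should match this structure (mine): ask for an object, not a String, and the missing-key case comes back as None without the null sentinel.

import play.api.libs.json.{JsObject, Json}

val json = Json.parse(jsonString)
val left: Option[JsObject] = (json \ "left").asOpt[JsObject]
// Some({"data":...}) when the key exists, None when it is absent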

by KerenSi at April 27, 2015 07:17 PM

Fefe

A lawyer once mailed me the actual legal basis ...

A lawyer once mailed me the actual legal basis for foreign deployments of the Bundeswehr. I quote:
The Federal Constitutional Court has located the legal basis for out-of-area deployments of the Bundeswehr in Art. 24 GG. The court's leading decision that first mentioned ("invented") this is in volume 90 of its reports (available, for instance, at: http://www.servat.unibe.ch/dfr/bv090286.html). The court's constitutional idea was: if the Federal Republic of Germany may, as Art. 24 GG provides, participate in international organizations that in turn carry out military operations (e.g. the UN, later also NATO), then the Grundgesetz thereby also implicitly authorizes out-of-area deployments of the Bundeswehr.

One can doubt whether it is compelling to assume this implicit premise. In any case, that is what our Constitutional Court says. So the Bundeswehr should also cite Art. 24 GG. Or you could at least make the "correct legal basis" known on your blog?

Happy to oblige!

April 27, 2015 07:01 PM

CompsciOverflow

Countability of the union of all finite and countably infinite sequences over a finite alphabet

Is the set of all finite and countably infinite sequences over $\{0,1\}$ countable?

From my analysis, I think it is countable. I think of this as the set of all strings over a finite alphabet $\Sigma=\{0,1\}$, hence $\Sigma^*$. (Is this a good assumption?)

I later show that I can count each string in the following order: $0$,$1$ (length $1$), $00$, $01$, $10$, $11$ (length $2$) and so on.

However, I am very confused by the initial requirement "finite and countably infinite sequences". Does my method account for the countably infinite strings?
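For reference, the countably infinite sequences alone already form an uncountable set, by Cantor's diagonal argument: given any enumeration $s_1, s_2, \ldots$ of infinite $\{0,1\}$-sequences, the sequence $d$ with $d_i = 1 - (s_i)_i$ differs from every $s_i$ in position $i$, so no enumeration can be complete. The counting order described above therefore covers only the finite strings, i.e. $\Sigma^*$.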

by revisingcomplexity at April 27, 2015 06:59 PM

StackOverflow

Not all akka stream Sinks receive the emitted data

When running the following Akka Streams FlowGraph, not all the emitted Chars are received by all Sinks.

package sample.stream

import java.io.{ FileOutputStream, PrintWriter }
import akka.actor.ActorSystem
import akka.stream.ActorFlowMaterializer
import akka.stream.scaladsl.{ Broadcast, FlowGraph, Sink, Source }
import scala.concurrent.forkjoin.ThreadLocalRandom
import scala.util.{ Failure, Success, Try }

object Sample {

  def main(args: Array[String]): Unit = {
    println("start")
    implicit val system = ActorSystem("Sys")
    import system.dispatcher
    implicit val materializer = ActorFlowMaterializer()
    var counter = -1

    val countSource: Source[Char, Unit] = Source(() => Iterator.continually { counter += 1; (counter + 'A').toChar }.take(11))

    var counter1 = 0
    val consoleSink1 = Sink.foreach[Char] { counter =>
      println("sink1:" + counter1 + ":" + counter)
      counter1 += 1
      Thread.sleep(100)
      //Thread.sleep(300)
    }
    var counter2 = 0
    val consoleSink2 = Sink.foreach[Char] { counter =>
      println("sink2:" + counter2 + ":" + counter)
      counter2 += 1
      Thread.sleep(200)
    }

    val materialized = FlowGraph.closed(consoleSink1, consoleSink2)((x1, x2) => x1) { implicit builder =>
      (console1, console2) =>
        import FlowGraph.Implicits._
        val broadcast = builder.add(Broadcast[Char](2))
        countSource ~> broadcast ~> console1
        broadcast ~> console2
    }.run()

    // ensure the output file is closed and the system shutdown upon completion
    materialized.onComplete {
      case Success(_) =>
        system.shutdown()
      case Failure(e) =>
        println(s"Failure: ${e.getMessage}")
        system.shutdown()
    }
    println("waiting the remaining ones")
    //scala.concurrent.Await.ready(materialized, scala.concurrent.duration.DurationInt(100).seconds)
    //system.shutdown()
    println("end")
  }
}

After running the following output is generated

[info] Running sample.stream.Sample
[info] start
[info] waiting the remaining ones
[info] end
[info] sink2:0:A
[info] sink1:0:A
[info] sink1:1:B
[info] sink1:2:C
[info] sink2:1:B
[info] sink1:3:D
[info] sink2:2:C
[info] sink1:4:E
[info] sink1:5:F
[info] sink2:3:D
[info] sink1:6:G
[info] sink1:7:H
[info] sink2:4:E
[info] sink2:5:F
[info] sink1:8:I
[info] sink1:9:J
[info] sink2:6:G
[info] sink2:7:H
[info] sink1:10:K

The second sink doesn't receive the 8th, 9th and 10th values (I, J, K), but the entire flow still ends.

What should I do to wait for both Sinks to consume all the data? I discovered that if I change the (x1,x2)=>x1 to (x1,x2)=>x2, it will wait. The same happens if I sleep 300 ms instead of 100 ms in the first sink.
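A hedged sketch of one option (keeping both materialized values instead of only the left one; the combine function is the only change to the code above):

val (done1, done2) = FlowGraph.closed(consoleSink1, consoleSink2)((x1, x2) => (x1, x2)) {
  implicit builder => (console1, console2) =>
    import FlowGraph.Implicits._
    val broadcast = builder.add(Broadcast[Char](2))
    countSource ~> broadcast ~> console1
    broadcast ~> console2
}.run()

// shut down only when BOTH sinks have finished (uses the dispatcher imported above)
scala.concurrent.Future.sequence(Seq(done1, done2)).onComplete { _ => system.shutdown() }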

by raisercostin at April 27, 2015 06:56 PM

Connect to a server in Scala [duplicate]

This question already has an answer here:

I have an assignment in a discipline from my graduation course where I have to communicate with a server. I am allowed to do that in any language, so I chose Scala. I only receive the port that I am supposed to do that, which is 127.0.0.1:50200.

I would like to know how I would make that connection in Scala. Is it a library, or something already built into the language? I know it is probably really simple, but I have never done something like this.

Ps.: Note that the server is an application that is running on my computer.
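A minimal sketch (nothing Scala-specific is required; the Java standard library does the work):

import java.io.{BufferedReader, InputStreamReader, PrintWriter}
import java.net.Socket

val socket = new Socket("127.0.0.1", 50200)
val out = new PrintWriter(socket.getOutputStream, true) // autoflush on println
val in  = new BufferedReader(new InputStreamReader(socket.getInputStream))

out.println("hello")   // send a line to the server
println(in.readLine()) // read one line of the reply
socket.close()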

by lhahn at April 27, 2015 06:47 PM

StackOverflow

Java error: "Comparison method violates its general contract!"

I have this code:

package org.optimization.geneticAlgorithm;
import org.optimization.geneticAlgorithm.selection.Pair;

public abstract class Chromosome implements Comparable<Chromosome> {
    public abstract double fitness();
    public abstract Pair<Chromosome> crossover(Chromosome parent);
    public abstract void mutation();
    public int compareTo(Chromosome o) {
        int rv = 0;
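        // Note: if fitness() can return Double.NaN, both comparisons below are
        // false, so NaN compares as "equal" to everything; that breaks
        // transitivity and is a classic trigger for this TimSort exception.
        // A fitness() that returns different values across calls on the same
        // object has the same effect.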
        if (this.fitness() > o.fitness()) {
            rv = -1;
        } else if (this.fitness() < o.fitness()) {
            rv = 1;
        }
        return rv;
    }
}

And every time I run this code I get this error:

Exception in thread "main" java.lang.IllegalArgumentException: Comparison method violates its general contract!
at java.util.ComparableTimSort.mergeHi(ComparableTimSort.java:835)
at java.util.ComparableTimSort.mergeAt(ComparableTimSort.java:453)
at java.util.ComparableTimSort.mergeCollapse(ComparableTimSort.java:376)
at java.util.ComparableTimSort.sort(ComparableTimSort.java:182)
at java.util.ComparableTimSort.sort(ComparableTimSort.java:146)
at java.util.Arrays.sort(Arrays.java:472)
at java.util.Collections.sort(Collections.java:155)
at org.optimization.geneticAlgorithm.GeneticAlgorithm.nextGeneration(GeneticAlgorithm.java:74)
at org.optimization.geneticAlgorithm.GeneticAlgorithm.execute(GeneticAlgorithm.java:40)
at test.newData.InferenceModel.main(InferenceModel.java:134)

I use OpenJDK7u3 and I return 0 when the objects are equal. Can someone explain this error to me?

by mariolpantunes at April 27, 2015 06:45 PM

QuantOverflow

What is the growth accounting relation of the Cobb-Douglas function?

We have a Cobb-Douglas function like this: $Y=AK^\alpha L^{1-\alpha}$. In one of the books, it is deduced like this: [derivation shown as an image in the original post]

How can we get this formula? $$\frac{\Delta Y}Y = \frac{\Delta A}A+\alpha\frac{\Delta K}K+(1-\alpha)\frac{\Delta L}L$$
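The standard route (a short sketch, since the book's derivation survives only as an image above): take logarithms of $Y=AK^\alpha L^{1-\alpha}$ and totally differentiate.

$$\ln Y = \ln A + \alpha \ln K + (1-\alpha)\ln L$$

$$\frac{dY}{Y} = \frac{dA}{A} + \alpha\,\frac{dK}{K} + (1-\alpha)\,\frac{dL}{L}$$

Replacing the differentials by discrete changes $\Delta$ gives the growth-accounting approximation quoted above.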

by ZHI at April 27, 2015 06:38 PM

CompsciOverflow

Is it decidable whether a pushdown automaton recognizes a given regular language?

The problem whether two pushdown automata recognize the same language is undecidable. The problem whether a pushdown automaton recognizes the empty language is decidable; hence it is also decidable whether it recognizes a given finite language. It is undecidable whether the language accepted by a pushdown automaton is regular. But ...

... is it decidable whether a pushdown automaton recognizes a given regular language?

In case the answer is no, does the problem become decidable if the given regular language has star height $1$?

by Thomas Klimpel at April 27, 2015 06:36 PM

/r/emacs

Org-mode with CUA

I'm using CUA mode, so when I select text, I lose any of the C-c bindings, like converting items to headings and vice versa. What's the best way for me to change the key bindings in org-mode to accommodate CUA mode?

submitted by grok_life

April 27, 2015 06:23 PM

StackOverflow

Declare a variable in Scala whose type is same as another variable

Let's say I have a variable

val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))

Now I want to declare a few more variables of the same type as allF, i.e. (Some[Int], String, Double, String, Int, Some[Int]). I can do

var a1: (Some[Int], String, Double, String, Int, Some[Int]) = _
var a2: (Some[Int], String, Double, String, Int, Some[Int]) = _
... and so on

or i can do

type T = (Some[Int], String, Double, String, Int, Some[Int])
var a1: T = _
var a2: T = _
.. and do on

Is there some way I can use the variable allF to get its type and declare variables a1, a2, a3, ... like this?

val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))
var a1: typeof(allF) = _
var a2: typeof(allF) = _
...

UPDATE - Also for situations like this

val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))
 xyz match {
   case y: (???) // if y is of the same type as allF
}
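One hedged sketch that avoids writing the tuple type at all (the helper is my own; the null default mirrors the underscore initializer):

def defaultLike[T](ref: T): T = null.asInstanceOf[T] // a default value carrying ref's type

val allF = (Some(1), "some string", 2.99, "another string", 1, Some(30))
var a1 = defaultLike(allF) // a1 is inferred to have the same tuple type as allF
var a2 = defaultLike(allF)

For the match case, plain type patterns cannot help here: the tuple's type arguments are erased at runtime, so a pattern like case y: (Some[Int], ...) would be unchecked anyway.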

by lovesh at April 27, 2015 06:11 PM

Fefe

You know what the Bundeswehr needs now? Depleted uranium ammunition! ...

You know what the Bundeswehr needs now? Depleted uranium ammunition! So that we can (no, really!!) afford a tank war against Russia! You don't believe it? Read for yourselves!
The Bundeswehr is currently not in a position to engage modern Russian battle tanks effectively. With the Leopard 2, the German armed forces do field one of the best battle tanks in the world. However, according to information obtained by "Welt am Sonntag", there is a lack of sufficiently penetrating ammunition for this weapon system.

The Bundeswehr's tungsten-based fin-stabilized penetrator ammunition does not produce enough kinetic energy to penetrate the technologically sophisticated armor of the newest Russian combat vehicles of the T90 type and of modernized T80s.

Hey, maybe we can simply agree that we plainly do not want to wage a campaign against Russia anymore? WTF?!?

I mean, how many times do continental European powers have to march into Russia before we notice that it is not a good idea?!

Update: The densities of tungsten and uranium are, by the way, very close to each other in this universe; tungsten is even slightly ahead. Tungsten rounds therefore actually carry more kinetic energy. But tungsten also costs much more than uranium, which is practically available as a waste product.

Update: Another comment:

The fact is that the gun of the Leopard 2 gives the tungsten penetrator rounds considerably more kinetic energy than other Western tank guns in service, among other things because the barrel is unusually long. With uranium ammunition this kinetic energy would be nearly identical.

Apart from that, Wikipedia explains the military advantages of uranium ammunition:

DU rounds are kinetic-energy penetrators that punch through the armor of a hard target by sheer momentum. Uranium is suited to this use above all because of its very high density, but also because of its property of deforming on impact in such a way that a point is preserved; uranium ammunition is therefore also described as "self-sharpening". An additional effect is that on impact with an armored target, hot uranium dust forms, which ignites spontaneously on contact with air inside the target (pyrophoric effect). This can set off the ammunition carried on board or the fuel, which can lead to the so-called secondary explosion of the target.

Update: Another reader notes that to attack tanks one prefers tandem shaped charges, and the Bundeswehr has those ready for use against targets like these alongside the Leopard 2.

April 27, 2015 06:01 PM

By the way, on the genocide of the Armenians: According to this ...

By the way, on the genocide of the Armenians: according to this book, the German Empire shares guilt for the genocide of the Armenians, via its then ally, the Ottoman Empire. Just as Turkey is the legal successor of the Ottoman Empire, Germany is the legal successor of the German Empire. Genocide is not subject to a statute of limitations.

Oh, and while we are on the subject of genocide: "Federal government: Germany committed no genocide against the Herero and Nama"

No, of course not! Not because it did not happen, no, not even the federal government lies that brazenly. But because one supposedly must not judge it by today's standards!1!!

In the view of the federal government, the brutal suppression of the uprising of the Herero and Nama peoples by German colonial troops between 1904 and 1908 in what was then German South West Africa, today's Namibia, cannot be judged by the rules of international humanitarian law in force today and therefore also cannot be classified as genocide.
That is the answer to a minor parliamentary inquiry by the Left Party.

The law governing this dates only from 1955 and is based on a convention of 1948, and it does not apply retroactively. Well, there you go.

Update: One more: Martin Sonneborn on the matter.

Update: A lawyer's comment on this:

"Legal succession (Rechtsnachfolge) is the term for the transfer of existing rights and obligations from one person to another (the 'legal successor')."

The Federal Republic, however, is not the legal successor of the German Reich but (partially) identical with it:

http://de.wikipedia.org/wiki/Rechtslage_Deutschlands_nach_1945
and within it
http://www.servat.unibe.ch/dfr/bv036001.html

Paragraph 76:

"The Federal Republic of Germany is therefore not the 'legal successor' of the German Reich, but as a state is identical with the state 'German Reich'; with respect to its territorial extent, however, it is only 'partially identical', so that in this respect the identity claims no exclusivity. As far as its people and its territory are concerned, the Federal Republic therefore does not encompass the whole of Germany, notwithstanding the fact that it recognizes a single people of the subject of international law 'Germany' (German Reich), to which its own population belongs as an inseparable part, and a single state territory 'Germany' (German Reich), to which its own state territory belongs as a likewise inseparable part."

April 27, 2015 06:01 PM

Some of you will have noticed by now: this ...

Some of you will have noticed by now: this blog is about media literacy. Unfortunately there are still readers who do not notice when they are walking right past a training opportunity. For example recently, with the question about the Bundeswehr and foreign deployments. For the legal situation there, I linked not to the actual legal situation but to the Bundeswehr's own interpretation. Granted, they could hardly misquote the statutes, but they could summarize them wrongly. How anyone could conclude, for example, that the Grundgesetz permits war missions in Afghanistan is not apparent. For that, the Bundeswehr cites Article 35 of the Grundgesetz, which speaks of Länder. What is meant, of course, are the federal states, not other countries. Hopefully nobody falls for sleight of hand like that, but I received only one comment from someone who noticed. And that was someone from the Grundrechtepartei. He doesn't count.

The other lesson, which it feels like practically everyone walked right past, was the feminism piece here. People, put in a bit more effort! This cannot go on. Various people apparently read that as "Fefe hates feminists and doesn't like Muslims either". WTF? What was actually meant, of course, was that Islam as a religion is dragged through the dirt by a few hooligans who trample the values of Islam underfoot and blather about jihad. It is similar with Christianity (even if the bit about loving one's neighbour too rarely extends to refugees there, so hypocrisy has to be diagnosed). And with feminism.

Buuuut. The Muslims distance themselves from the jihadists again and again. The Christians distance themselves from the fundamentalist crackpots again and again. Only the feminists do not distance themselves from their fundamentalist crackpots; they would much rather settle into a comfortable victim role as a misunderstood and insulted minority, just as Christians in the USA, completely irrationally, keep assigning themselves the role of a minority so that they too can feel persecuted for once.

Of course I deliberately phrased it nice and ambiguously, so that it would be easy to let oneself be lulled and scroll on. But to anyone who wants to be lulled and scroll on, let me say this clearly to your face for once: you are in the wrong place here.

So, dear readers. Put in a bit more effort. I want to see insight gained here, and pushback against ready-made pacifier phrasings!

On the other hand, I will concede that it pleases me when my crude attempts at manipulation work. It means that I am not doing this for nothing, that there really is a need. And that I am not doing it too badly, otherwise nobody would fall for it.

Update: Another note from the guy from the Grundrechtepartei: they do not say Gesetzesgrundlage (statutory basis), which is the correct terminology, but rechtliche Grundlage (legal basis). That is presumably something like "premium" or "yogurt with fruit preparation" at the supermarket.

Update: Incidentally, the link was wrong too :) It went to their justification for Bundeswehr deployments at home, not abroad. The justification for foreign deployments of the Bundeswehr is indeed based on Art. 24 GG.

April 27, 2015 06:01 PM

You don't say! The Chancellery apparently knew as early as ...

You don't say! The Chancellery apparently knew as early as 2008 that the BND was conducting industrial espionage in Germany for the NSA! Responsible at the time: Mr. de Maiziere (today Interior Minister), then Ronald Pofalla (today a lobbyist at Deutsche Bahn).

April 27, 2015 06:01 PM

StackOverflow

How do I specify type parameters via a configuration file?

I am building a market simulator using Scala/Akka/Play. I have an Akka actor with two children. The children need to have specific types which I would like to specify as parameters.

Suppose that I have the following class definition...

case class SecuritiesMarket[A <: AuctionMechanismLike, C <: ClearingMechanismLike](instrument: Security) extends Actor
  with ActorLogging {

  val auctionMechanism: ActorRef = context.actorOf(Props[A], "auction-mechanism")

  val clearingMechanism: ActorRef = context.actorOf(Props[C], "clearing-mechanism")

  def receive: Receive = {
    case order: OrderLike => auctionMechanism forward order
    case fill: FillLike => clearingMechanism forward fill
  }

}

Instances of this class can be created as follows...

val stockMarket = SecuritiesMarket[DoubleAuctionMechanism, CCPClearingMechanism](Security("GOOG"))
val derivativesMarket = SecuritiesMarket[BatchAuctionMechanism, BilateralClearingMechanism](Security("SomeDerivative"))

There are many possible combinations of auction mechanism types and clearing mechanism types that I might use when creating a SecuritiesMarket instance for a particular model/simulation.

Can I specify the type parameters that I wish to use in a given simulation in the application.conf file?
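A hedged sketch of one workaround: since JVM type parameters are erased at runtime anyway, Props[A] buys little over a class name read from configuration, so you can put fully qualified class names into application.conf and build the Props reflectively. The config keys and class names below are hypothetical, not from the original code:

import akka.actor.Props
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.load()

// assumed application.conf layout (hypothetical keys):
//   simulation.auction-mechanism  = "markets.DoubleAuctionMechanism"
//   simulation.clearing-mechanism = "markets.CCPClearingMechanism"
def propsFor(key: String): Props =
  Props(Class.forName(config.getString(key)))

// inside SecuritiesMarket, replacing Props[A] and Props[C]:
//   val auctionMechanism = context.actorOf(propsFor("simulation.auction-mechanism"), "auction-mechanism")

The price of this approach is that a mistyped class name only fails at actor creation time rather than at compile time.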

by davidrpugh at April 27, 2015 05:58 PM

Install scalatest in Scala IDE for Eclipse

I have installed Eclipse Luna. Then I installed Scala IDE via Help -> Install new software, adding a software site with the link from here. Then I installed sbt 0.13 and sbteclipse using this mini-guide and created an eclipse project. Then I (kinda) installed scalatest by adding it to my build.sbt. Now it looks like this:

val scalaTest = "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"

lazy val commonSettings = Seq(
  scalaVersion := "2.11.6"
)

lazy val root = (project in file(".")).
  settings(commonSettings: _*).
  settings(
    libraryDependencies += scalaTest
  )

Then I created a test from this example. The file called FirstSpec.scala is located in testProject/src/test/scala-2.11/testProject/. So here is the problem: Eclipse seems not to see scalatest. The second line with import org.scalatest._ is underlined red with the error description object scalatest is not a member of package org. And, following this guide, I don't see the option Run As -> ScalaTest - Suite when choosing my test class. At the same time everything goes fine when I start an sbt session in my test project and type the test command. The tests launch and pass.

So my questions are:

  • Why doesn't Eclipse see scalatest if I put it in build.sbt's libraryDependencies? What's the point of libraryDependencies then?
  • Why does sbt test run the tests without a problem? If sbt sees scalatest, why can't Eclipse?

by Zapadlo at April 27, 2015 05:55 PM

How to prove size of a list in Leon?

I am trying to prove that the size (number of elements) of a list is non-negative, but Leon fails to prove it---it just times out. Is Leon really not capable of proving this property, or am I using it wrongly? My starting point is a function I read in the paper "An Overview of the Leon Verification System".

import leon.lang._
import leon.annotation._

object ListSize {
  sealed abstract class List
  case class Cons(head: Int, tail: List) extends List
  case object Nil extends List

  def size(l: List) : Int = (l match {
    case Nil => 0
    case Cons(_, t) => 1 + size(t)
  }) ensuring(_ >= 0)
}
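One plausible explanation, assuming Leon treats Int as a 32-bit machine integer: 1 + size(t) can overflow to a negative value for sufficiently long lists, so the postcondition as stated is actually invalid, and switching to BigInt should let it verify. A sketch of that change:

  def size(l: List): BigInt = (l match {
    case Nil => BigInt(0)
    case Cons(_, t) => 1 + size(t)  // BigInt arithmetic cannot overflow
  }) ensuring (_ >= 0)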

by vkuncak at April 27, 2015 05:55 PM

QuantOverflow

What is the most efficient way to periodically download all new 10-K filings from SEC's EDGAR?

I found this website which uses a perl script to download all the filings. It states: "There are 200K+ 10-K (and equivalent) filings, which will take considerable harddisk space and time to download. The SEC prefers that bulk-download is done during 'quiet time', i.e., outside the regular trading hours."

Is there a better way than deluging the SEC's servers with requests to get this information?
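One commonly suggested alternative to crawling is EDGAR's quarterly full-index files, which list every filing of a given form type in one small download; you then fetch only the documents you need. A hedged Scala sketch, assuming the usual full-index URL layout (not verified here):

import scala.io.Source

// assumed layout: https://www.sec.gov/Archives/edgar/full-index/<year>/QTR<n>/form.idx
val src = Source.fromURL("https://www.sec.gov/Archives/edgar/full-index/2015/QTR1/form.idx")
try {
  // form.idx is a fixed-width listing whose first column is the form type;
  // this prefix test also matches amendments such as 10-K/A
  val tenKs = src.getLines().filter(_.startsWith("10-K")).toVector
  println(s"${tenKs.size} 10-K filings listed for 2015 Q1")
} finally src.close()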

by jsc123 at April 27, 2015 05:54 PM

/r/compsci

Audio tutorials/books about algorithms and data structures

Every day I waste 2 hours because of my commute. I can't read because of the vibration. I was wondering if you know any resources that I can listen to, to learn algorithms and data structures?

submitted by n1b2

April 27, 2015 05:52 PM

StackOverflow

explode json array in schema rdd

I have a json like :

{"name":"Yin", "address":[{"city":"Columbus","state":"Ohio"},{"city":"Columbus","state":"Ohio"}]} {"name":"Michael", "address":[{"city":null, "state":"California"},{"city":null, "state":"California"}]

Here address is an array, and if I use sqlContext.jsonFile I get the data in a SchemaRDD as follows:

[Yin , [(Columbus , Ohio) , (Columbus , Ohio)] [Micheal , [(null, California) , (null, California)]

I want to explode the array and get the data in the following format in the SchemaRDD:

[Yin, Columbus, Ohio] [Yin, Columbus, Ohio] [Micheal, null, California] [Micheal, null, California]

I am using Spark SQL.
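A hedged sketch of one way to do this with the Spark 1.3-era API, written from memory and untested: select the columns in a known order, then flatMap over the Rows, emitting one tuple per address. It assumes jsonFile infers struct fields in alphabetical order (city before state), and the input file name is made up:

import org.apache.spark.sql.Row

val people = sqlContext.jsonFile("people.json")

val flat = people.select("name", "address").flatMap { row =>
  val name = row.getString(0)
  val addresses = row.getAs[Seq[Row]](1)          // the JSON array of structs
  addresses.map(a => (name, a.getString(0), a.getString(1)))  // (name, city, state)
}
flat.collect().foreach(println)  // (Yin,Columbus,Ohio), (Yin,Columbus,Ohio), ...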

by user3775045 at April 27, 2015 05:36 PM

/r/compsci

Late IEEE Conference paper notification?

The notification should have come two days ago. What should I make of this matter?

submitted by 12398235123

April 27, 2015 05:35 PM

StackOverflow

How do you provide domain credentials to ansible's mount module?

I've figured out how to use the shell module to create a mount on the network using the following command:

- name: mount image folder share
  shell: "mount -t cifs -o domain=MY_DOMAIN,username=USER,password=PASSWORD  //network_path/folder /local_path/folder
  sudo_user: root
  args:
    executable: /bin/bash

But it seems like it's better practice to use Ansible's mount module to do the same thing.

Specifically, I'm confused about how to provide the options domain=MY_DOMAIN,username=USER,password=PASSWORD. I see there is an opts parameter, but I'm not quite sure how this would look.
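A hedged, untested sketch of what the mount-module version might look like: opts takes the same comma-separated string you would pass to mount -o, so the shell task above translates roughly to:

- name: mount image folder share
  mount:
    name: /local_path/folder
    src: //network_path/folder
    fstype: cifs
    opts: "domain=MY_DOMAIN,username=USER,password=PASSWORD"
    state: mounted

As with the shell version, the password ends up in plain text; a vars file encrypted with ansible-vault, or a credentials file referenced via opts, keeps it out of the playbook.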

by Adam Kalnas at April 27, 2015 05:27 PM

How do i subtract an RDD[(Key,Object)] from another one?

I want to change the format of my data from RDD[(Label: String, (ID: String, Data: Array[Double]))] to an RDD of objects with the label, id and data as components. But when I print my RDD twice in a row, the object references change:

class Data_Object(private val id:String, private var vector:Vector) extends Serializable {
var label = ""
...
}

First print 
(1,ms3.Data_Object@35062c11)
(2,ms3.Data_Object@25789aa9)

Second print
(2,ms3.Data_Object@6bf5d886)
(1,ms3.Data_Object@a4eb65)

I think that explains why the subtract method doesn't work. So can I use subtract with an object as the value, or do I have to return to my classic model?
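A hedged diagnosis: subtract compares elements by equals/hashCode, and a plain class like Data_Object inherits reference identity from AnyRef, so two structurally identical objects never compare equal and nothing is ever subtracted. Declaring the type as a case class (or overriding equals and hashCode by hand) restores structural equality, for example:

// case classes get equals, hashCode and a sensible toString generated
// from their constructor fields (Vector as in the original code)
case class DataObject(id: String, vector: Vector, label: String = "")

// with structural equality, rddA.subtract(rddB) removes exactly those
// (key, DataObject) pairs whose contents match, regardless of references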

Thank you for your precious help.

by KyBe at April 27, 2015 05:24 PM

Is using Try[Unit] the proper way?

Recently I started to mess around with Scala. I came across the concept of Try/Success/Failure. Now I am wondering how to use this concept when a method has the return type Unit. Is using Try[Unit] the correct way? Maybe I am too influenced by my Java background, but is it a good idea to force the caller to deal with the problem?
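For what it's worth, Try[Unit] is a common signature for a side-effecting call that can throw; the only interesting "value" is whether it succeeded. A small sketch (the file-writing example is made up for illustration):

import java.nio.file.{Files, Paths}
import scala.util.{Failure, Success, Try}

def saveReport(path: String, data: String): Try[Unit] = Try {
  Files.write(Paths.get(path), data.getBytes("UTF-8"))
  ()  // discard the returned Path; only success or failure matters
}

saveReport("/tmp/report.txt", "hello") match {
  case Success(_) => println("saved")
  case Failure(e) => println(s"could not save: ${e.getMessage}")
}

Unlike a checked exception, the caller is free to ignore the Failure, so it nudges rather than forces; that trade-off is arguably the idiomatic Scala answer to the question.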

by Karda at April 27, 2015 05:21 PM

How to reference the name of a val in a Object class in scaladoc?

Is there a way that I can reference the name of a val in an object using Scaladoc, similar to

{@value #STATIC_FIELD} 

in javadoc.

by pegausbupt at April 27, 2015 05:21 PM

Planet Theory

Advice on running a theory day

Last semester I ran a THEORY DAY AT UMCP. Below I have ADVICE for people running theory days. Some I did, some I didn't do but wish I did, and some are just questions you need to ask yourself.

1) Picking the day- I had two external speakers (Avrim Blum and Sanjeev Arora) so I was able to ask THEM what day was good for THEM. Another way is to pick the DAY first and then ask for speakers.

2) Number of external speakers- We had two, and the rest were internal. The external speakers had hour-long talks, the internal had 20-minute talks. This worked well; however, one can have more or even all external speakers.

3) Whoever is paying for it should be told about it towards the beginning of the process.

4) Lunch- catered or out? I recommend catered if you can afford it, since it's a good time for people to all talk. See next point.

5) If it's catered you need a head count, so you need people to register. The number you get may be off- you may want to ask when they register whether they want lunch. Then add ten percent.

6) Ask the guest speakers what arrival time works for them before they make plans, so you can coordinate their dinner the previous night.

7) If you have the money and energy, do name tags ahead of time. If not, then just have some sticky tags and a magic marker.

8) Guest speakers- getting them FROM Amtrak or the airport to dinner/hotel --- give them a personal lift (they may be new to the city and a bit lost). For getting them from the event back TO the airport or Amtrak, you can call a limo or taxi (though if you can give them a ride, that's of course good).

9) Pick a day early and stick with it. NO day is perfect, so if someone can't make it, or there is a faculty meeting that day, then don't worry about it.

10) Have website, speakers, all set at least a month ahead of time. Advertise on theory-local email lists, blogs you have access to, and analogs of theory-local for other places (I did NY, NJ, PA). Also email people  to spread the word.

11) Advertise to ugrads. Students are the future!

12) If you are the organizer you might not want to give a talk since you'll be too busy doing other things.

13) Well established regular theory days (e.g., NY theory day) can ignore some of the above as they already have things running pretty well.


by GASARCH (noreply@blogger.com) at April 27, 2015 05:20 PM

Fefe

Comment on the S-400 sale: There was recently an ...

Comment on the S-400 sale:
  1. There was recently an English-language article about the Russians voicing their confidence in these matters. The claim was that the Chinese will need years to copy the system, and by then the Russians will have the next generation ready. Whoever reverse-engineers is always lagging behind.
  2. If the Chinese use systems equivalent to the Russian ones, then the Russians at least know the Chinese capabilities well. Several conflicts have shown that precise knowledge of air-defense systems already largely devalues them (e.g. Roland in the Falklands, where the British knew everything essential about it from the French, or the Russian systems in the Bekaa Valley, which the Israelis, with thorough preparation, defeated completely).
  3. The Russians simply need it, financially.
  4. For a long time now there have been no new signs of "monkey model" export versions of Russian defense equipment. They presumably assume, correctly, that the West can by now also get at technical secrets through espionage inside Russia itself. "Export versions" are sometimes even more elaborate than the versions Russia procures for itself, or (as in India's case) tailored to the customer's wishes.

The S-400 variant with the greatest range is, by the way, relevant against radar aircraft. An E-3 AWACS can barely detect and track modern combat aircraft at 200-300 km, so in the presence of S-400 the AWACS gets pushed back so far that it becomes a purely defensive system. Attacking combat aircraft would then have to cover each other with their forward-facing radars (which requires a great many aircraft). One answer to this would be the famous "stealth" approach, but that does not work well against very long-wavelength radar, and S-400 exploits exactly that.

The West (as far as is publicly known) does not even have a really good answer to the S-400's predecessor, the S-300, in its newer versions. As soon as S-300 / S-400 is standing around somewhere, the typically American bullying by "cruise missile diplomacy" is apparently no longer possible.

April 27, 2015 05:01 PM

Have you ever been asked by support for your disk-encryption ...

Have you ever been asked by support for your disk-encryption password?

Me neither. But then, I always remove the drives anyway before sending a device in anywhere.

Here is a field report that came in by mail:

How about simply asking for administrator passwords directly? Much easier anyway. Yesterday I was asked by an employee at an Apple Certified Service Provider for my password, so that the technician could decrypt my drive. Without the data on the hard drive, I was told, Apple would not be able to check whether the hardware is OK (???). Yes, sure, that is completely normal. It all gets sent to Apple over the Internet. They won't do anything bad with it. It is all for my own security. The reason for the repair was a loose part in the case. So the technician basically had to tighten a screw. These special screws are apparently so complex that you definitely need unrestricted access to everything and also have to check whether the data on the drive is the way Apple wants it.

April 27, 2015 05:01 PM

Recently in New York: Swedish cops on vacation in ...

Recently in New York: Swedish cops on vacation in the New York subway show the Americans how to end a brawl without firearms. New Yorkers are now a little unsettled that the situation could be de-escalated entirely without dead black people.

April 27, 2015 05:01 PM

One of the most fascinating topics, for me, is how ...

One of the most fascinating topics, for me, is how people deal with guilt. Just take this case. An Australian health blogger who described in her blog how she beat her cancer with a healthy diet. She got a book deal, several hundred thousand dollars in donations, Apple invited her to Cupertino (she had also turned her diet stuff into a popular app), ... and then it came out that she suffered from dishonesty rather than cancer. All of it made up from start to finish. The money: gone. The book deal naturally fell through as well, and the app is no longer in the App Store. A good opportunity for a bit of self-reflection, you would think. And what does she have to say?
"I just think [speaking out] was the responsible thing to do. Above anything, I would like people to say, 'Okay, she’s human. She’s obviously had a big life. She’s respectfully come to the table and said what she’s needed to say, and now it’s time for her to grow and heal.'"
In other words: she still sees herself as the hero of this story. That was merely human, and look, I came forward and confessed! Now I must grow up and heal.

Yes, heal! Says the woman who, with false cancer claims, took in hundreds of thousands of dollars in donations … what do you even call that, embezzled?

The police, by the way, have decided not to investigate.

Just imagine Bernie Madoff standing up in court and saying: look, I confessed, now it is time for me to grow up and heal.

WTF?

April 27, 2015 05:01 PM

Do you know the problem? A funeral, and hardly anyone shows up? ...

Do you know the problem? A funeral, and hardly anyone shows up? That is particularly awkward in the Chinese cultural sphere, because there a large crowd of mourners counts as a lucky omen for the life after death.

So what to do? Of course! Bring in the strippers!

The Chinese government is displeased and is now cracking down on it. The phenomenon seems to be older, though, and to have made it all the way to Taiwan.

April 27, 2015 05:01 PM

QuantOverflow

Time series analysis on illiquid price data?

Say, for example, I have the following companies in some specialized industry:

A - A company that is about to be listed on Exchange 1, i.e., no price history

B - A company that produces similar products to Company A, listed on Exchange 1 as well; however, B has very thin volume and its price can stay the same for weeks.

C - Similar to company A but listed on Exchange 2, again, thin trading volume

D - Similar to company B but listed on Exchange 2, also thin trading volume

For companies B, C and D, I have their historical EOD price for the past two years.

Exchanges 1 and 2 are located on different continents and there is very little correlation between the two; also, there is no index for this industrial sector. (However, the prices of companies C&D and A&B should be correlated.) Also, we cannot assume the price time series is non-stationary, as the products those companies produce could be seasonal in nature.

I would like to figure out the "correct" market price for Company A before it is listed, based on the information above. And my results so far show that each of the price time series I have fits a different ARIMA model.

Therefore, my question is: how can I tackle these price data to start my analysis? Bear in mind that this is all the data I have and can get.

by AZhu at April 27, 2015 04:57 PM

StackOverflow

How do I navigate through a nested array-map that contains vectors in Clojure

I have the following array-map created in Clojure.

{:node 7, :children [{:node 8, :children []} {:node 6, :children []} {:node 23, :children {}} {:node 43, :children []}]}

How do I go about adding elements to this? Running the following code

(def tree (assoc-in tree [:node] 12))

gives me

{:node 12, :children [{:node 8, :children []} {:node 6, :children []} {:node 10, :children {}} {:node 13, :children []} {:node 28, :children []}]}

and running

(def tree (assoc-in tree [:node :children] 12))

gives me the following error message. How do I add elements to the :children sections of the array-map?

Exception in thread "main" java.lang.ClassCastException: java.lang.Long cannot be cast to clojure.lang.Associative,

by Conor at April 27, 2015 04:54 PM

CompsciOverflow

Graph build for static analysis

I'm new to the world of static analysis and am trying to build a new analysis for the LLVM compiler. However, I feel that I have some theoretical gaps in the field and find it difficult to proceed. Plus, I cannot find any useful practical tutorial on the web about how to build an analysis. (I'd really appreciate it if you had any to pass on to me.)

I'm on the phase of building the graph. But I deal with some questions that I don't have any sure answers:

  1. Do I really need the symbol table, or can I skip it and add the nodes straight to the graph? I developed a symbol table to label every variable with its scope. Would it be possible to skip it and add the types straight to the graph?
  2. Speaking of the graph, doesn't it depend on the nature of the analysis? Does a simple control flow graph suffice for every analysis? Isn't it different when we have a control-flow analysis versus a data-flow analysis?
  3. Is there any useful material on how to build an analysis on top of LLVM, or one in general? (Or any material on building a graph for it?)

Thanks in advance :)

by emma at April 27, 2015 04:54 PM

TheoryOverflow

Is there any special property the resulting graph G' has?

An undirected graph $G$ can be partitioned into several vertex blocks; each vertex pair $(u,v)$ has an edge if $u$ and $v$ are in different blocks, and no edge otherwise. Question is: Suppose we have several such graphs $G_1,\ldots,G_n$, where a vertex $v_i$ may be in more than one $G_j$. If we combine graphs $G_1,\ldots,G_n$ by taking the union of their edge sets to get a graph $G'$, as shown in the example below, what is the name of the resulting graph? Does $G'$ have any special properties for the vertex cover problem or some other classical problems?

by Micheal at April 27, 2015 04:53 PM

CompsciOverflow

What do we know about NP ∩ co-NP and its relation to NPI?

A TA dropped by today to ask some things about NP and co-NP. We arrived at a point where I was stumped, too: what does a Venn diagram of P, NPI, NP, and co-NP look like assuming P ≠ NP (the other case is boring)?

There seem to be four basic options.

  1. NP ∩ co-NP = P

    In particular, co-NPI ∩ NPI = ∅

  2. NP ∩ co-NP = P ∪ NPI

    In particular, co-NPI = NPI?

  3. NP ∩ co-NP ⊃ P ∪ NPI ∪ co-NPI

    A follow-up question in this case is how NPC and co-NPC are related; is there an overlap?

  4. Something else: in particular, some problems from NPI are in co-NP and others are not.

Do we know which is right, or at least which cannot be true?

The complexity zoo entries for NPI and NP ∩ co-NP do not inspire much hope that anything is known, but I'm not really fluent enough in complexity theory to comprehend all the other classes (and their impact on this question) floating around there.

by Raphael at April 27, 2015 04:35 PM

QuantOverflow

How to estimate CVA by valuing a CDS of the counterparty?

I'm trying to estimate CVA of one of my derivatives by valuing a credit default swap (CDS) of my counterparty. However, I don't know how to set up the CDS deal (notional amount, maturity, etc.).

Thanks!

by Carlos F. at April 27, 2015 04:35 PM

StackOverflow

heroku permission denied on netty start

I have deployed a Play Scala/Java web app on Heroku that uses an embedded Netty server. The Procfile command is web: target/universal/stage/bin/mmbu-timesheets -Dhttp.port=80

Taken from heroku logs:

2015-04-27T13:49:27.160198+00:00 heroku[web.1]: Starting process with command `target/universal/stage/bin/mmbu-timesheets -Dhttp.port=80`
2015-04-27T13:49:29.321552+00:00 app[web.1]: Picked up JAVA_TOOL_OPTIONS: -Xmx384m -Xss512k -Dfile.encoding=UTF-8 -Djava.rmi.server.useCodebaseOnly=true
2015-04-27T13:49:29.898379+00:00 app[web.1]: Play server process ID is 3
2015-04-27T13:49:32.549840+00:00 app[web.1]: [[37minfo[0m] application - mongodb connection ds031701.mongolab.com:31701 db->heroku_app36286493
2015-04-27T13:49:33.374954+00:00 app[web.1]: [[37minfo[0m] play - Application started (Prod)
2015-04-27T13:49:33.628067+00:00 app[web.1]: Oops, cannot start the server.
2015-04-27T13:49:33.630568+00:00 app[web.1]:    at play.core.server.NettyServer$$anonfun$8.apply(NettyServer.scala:89)
2015-04-27T13:49:33.630523+00:00 app[web.1]:    at play.core.server.NettyServer$$anonfun$8.apply(NettyServer.scala:92)
2015-04-27T13:49:33.630859+00:00 app[web.1]:    at play.core.server.NettyServer$.createServer(NettyServer.scala:206)
2015-04-27T13:49:33.630358+00:00 app[web.1]: org.jboss.netty.channel.ChannelException: Failed to bind to: /0.0.0.0:80
2015-04-27T13:49:33.630483+00:00 app[web.1]:    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
2015-04-27T13:49:33.630606+00:00 app[web.1]:    at scala.Option.map(Option.scala:146)
2015-04-27T13:49:33.630802+00:00 app[web.1]:    at play.core.server.NettyServer.<init>(NettyServer.scala:89)
2015-04-27T13:49:33.630898+00:00 app[web.1]:    at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:243)
2015-04-27T13:49:33.630941+00:00 app[web.1]:    at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:238)
2015-04-27T13:49:33.631096+00:00 app[web.1]:    at play.core.server.NettyServer$.main(NettyServer.scala:238)
2015-04-27T13:49:33.630975+00:00 app[web.1]:    at scala.Option.map(Option.scala:146)
2015-04-27T13:49:33.631150+00:00 app[web.1]:    at play.core.server.NettyServer.main(NettyServer.scala)
2015-04-27T13:49:33.633017+00:00 app[web.1]: Caused by: java.net.SocketException: Permission denied
2015-04-27T13:49:33.633065+00:00 app[web.1]:    at sun.nio.ch.Net.bind0(Native Method)
2015-04-27T13:49:33.633136+00:00 app[web.1]:    at sun.nio.ch.Net.bind(Net.java:437)
2015-04-27T13:49:33.633179+00:00 app[web.1]:    at sun.nio.ch.Net.bind(Net.java:429)
2015-04-27T13:49:33.633235+00:00 app[web.1]:    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
2015-04-27T13:49:33.633396+00:00 app[web.1]:    at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
2015-04-27T13:49:33.633444+00:00 app[web.1]:    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
2015-04-27T13:49:33.633550+00:00 app[web.1]:    at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
2015-04-27T13:49:33.633600+00:00 app[web.1]:    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
2015-04-27T13:49:33.633643+00:00 app[web.1]:    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
2015-04-27T13:49:33.633305+00:00 app[web.1]:    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
2015-04-27T13:49:33.633525+00:00 app[web.1]:    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
2015-04-27T13:49:33.633703+00:00 app[web.1]:    at java.lang.Thread.run(Thread.java:745)
2015-04-27T13:49:34.926567+00:00 heroku[web.1]: Process exited with status 255
2015-04-27T13:49:34.945535+00:00 heroku[web.1]: State changed from starting to crashed

I get "Permission denied" exception. Is there a way to start the netty server in heroku?

by Jonathan at April 27, 2015 04:34 PM

QuantOverflow

forecasting trading costs with end of day data

I am trying to create a model that forecasts trading costs (using end-of-day data, so no intraday data). My trading cost (also called Implemented Shortfall, IS) is defined as follows for a single stock,

IS = (vwap - open) / open

for the market as a whole,

IS = abs(IS_single_stock - IS_market_median)

Variables that I am looking at include a company's market cap, the daily spread, vwap, volume & a liquidity measure called liq_m.

Doing a simple linear regression of each variable against IS produced very low R-squared values, below 0.1. Combining the variables did very little to improve the results. The residual plots appear to have some pattern; the one below is similar for most of the variables (this is mcap vs IS residuals). (plot: residuals, mcap vs IS)

The normal probability plot also highlights that the residuals are not linear & have a left skew.

(plot: normal probability plot of the residuals)

In the literature I have read on implementation shortfall, all the models are non-linear, so this is not unexpected.

I am unsure, though, how to proceed next, i.e. how to select an appropriate non-linear model for testing. The end goal is to have a model that allows me to forecast the cost of trading a certain company.

Below are two more plots. One is the daily plot of mcaps over time - a mean is used to calculate the mcap of the 100 companies used in the sample. Beneath that is the Implemented Shortfall again a mean is used in the plot.

(plots: mean mcap over time; mean IS over time)

by mHelpMe at April 27, 2015 04:29 PM

StackOverflow

StackOverflowError for coin change in Scala?

I'm writing a recursive function for the Coin (change) problem in Scala.

My implementation breaks with StackOverflowError and I can't figure out why it happens.

Exception in thread "main" java.lang.StackOverflowError
    at scala.collection.immutable.$colon$colon.tail(List.scala:358)
    at scala.collection.immutable.$colon$colon.tail(List.scala:356)
    at recfun.Main$.recurs$1(Main.scala:58) // repeat this line over and over

this is my call:

  println(countChange(20, List(1,5,10)))

this is my definition:

def countChange(money: Int, coins: List[Int]): Int =  {

  def recurs(money: Int, coins: List[Int], combos: Int): Int = 
  {    
      if (coins.isEmpty)
          combos
      else if (money==0)
          combos + 1
      else
          recurs(money,coins.tail,combos+1) + recurs(money-coins.head,coins,combos+1)

  }
  recurs(money, coins, 0)
} 

Edit: I just added this else if statement into the mix:

else if(money<0)
    combos

It got rid of the error, but my output is 1500-something :( What is wrong with my logic?
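Two separate issues interact here. First, without a money < 0 base case the overshooting branches recurse forever, hence the StackOverflowError. Second, combos is incremented on every call instead of once per completed combination, which inflates the count; the accumulator is not needed at all. A sketch of the usual formulation:

def countChange(money: Int, coins: List[Int]): Int =
  if (money == 0) 1                        // exactly paid: one valid combination
  else if (money < 0 || coins.isEmpty) 0   // overshot, or no coin kinds left
  else countChange(money - coins.head, coins) +  // use the first coin (again)
       countChange(money, coins.tail)            // or never use it again

countChange(20, List(1, 5, 10))  // 9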

by J L at April 27, 2015 04:14 PM

StackOverflow

Constructing TypeTags of higher-kinded types

Given a simple parametrized type like class LK[A], I can write

// or simpler def tagLK[A: TypeTag] = typeTag[LK[A]]
def tagLK[A](implicit tA: TypeTag[A]) = typeTag[LK[A]]

tagLK[Int] == typeTag[LK[Int]] // true

Now I'd like to write an analogue for class HK[F[_], A]:

def tagHK[F[_], A](implicit ???) = typeTag[HK[F, A]] 
// or some other implementation?

tagHK[Option, Int] == typeTag[HK[Option, Int]]

Is this possible? I've tried

def tagHK[F[_], A](implicit tF: TypeTag[F[_]], tA: TypeTag[A]) = typeTag[HK[F, A]]

def tagHK[F[_], A](implicit tF: TypeTag[F], tA: TypeTag[A]) = typeTag[HK[F, A]]

but neither works for the obvious reasons (in the first case F[_] is the existential type instead of the higher-kinded one, in the second TypeTag[F] doesn't compile).

I suspect the answer is "it's impossible", but would be very happy if it isn't.
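One workaround that does compile, offered as a partial answer rather than the general construction: instead of assembling the tag from a tag for F and a tag for A, ask the compiler to materialize the tag for the fully applied type at the call site, where F and A are statically known:

import scala.reflect.runtime.universe._

// HK as defined in the question: class HK[F[_], A]
def tagHK[F[_], A](implicit t: TypeTag[HK[F, A]]): TypeTag[HK[F, A]] = t

tagHK[Option, Int] == typeTag[HK[Option, Int]]  // true

This pushes the obligation to the caller and fails exactly where you would expect: inside another generic method, no TypeTag[HK[F, A]] can be materialized unless the caller in turn demands one.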

by Alexey Romanov at April 27, 2015 04:12 PM

Clojure tree from list of edges

Using Clojure I currently have a list of edges in the format {:a 13, :b 29, :cost 2, :children {}}. What would be the best way to create a tree from this list?

by Conor at April 27, 2015 04:09 PM

Why does IDEA report "Error:scalac: error while loading Object, Missing dependency 'object scala in compiler mirror'" building scala breeze?

The breeze project builds fine from command line sbt:

sbt package
...
[info] Done packaging.
[info] Packaging /shared/breeze/viz/target/scala-2.11/breeze-viz_2.11-0.11-SNAPSHOT.jar ...
[info] Done packaging.
[success] Total time: 238 s, completed Jan 27, 2015 9:40:03 AM

However, the following error comes up repeatedly when doing Build|Rebuild Project in IntelliJ IDEA 14:

Error:scalac: error while loading Object, Missing dependency 'object scala in compiler mirror', required by /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home/jre/lib/rt.jar(java/lang/Object.class)

Here is the full stacktrace:

Error:scalac: Error: scala.tools.nsc.typechecker.Namers$Namer.enterExistingSym(Lscala/reflect/internal/Symbols$Symbol;)Lscala/tools/nsc/typechecker/Contexts$Context;
java.lang.NoSuchMethodError: scala.tools.nsc.typechecker.Namers$Namer.enterExistingSym(Lscala/reflect/internal/Symbols$Symbol;)Lscala/tools/nsc/typechecker/Contexts$Context;
    at org.scalamacros.paradise.typechecker.Namers$Namer$class.enterSym(Namers.scala:41)
    at org.scalamacros.paradise.typechecker.Namers$$anon$3.enterSym(Namers.scala:13)
    at org.scalamacros.paradise.typechecker.AnalyzerPlugins$MacroPlugin$.pluginsEnterSym(AnalyzerPlugins.scala:35)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$$anon$13.custom(AnalyzerPlugins.scala:429)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$$anonfun$2.apply(AnalyzerPlugins.scala:371)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$$anonfun$2.apply(AnalyzerPlugins.scala:371)
    at scala.collection.immutable.List.map(List.scala:273)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$class.invoke(AnalyzerPlugins.scala:371)
    at scala.tools.nsc.typechecker.AnalyzerPlugins$class.pluginsEnterSym(AnalyzerPlugins.scala:423)
    at scala.tools.nsc.Global$$anon$1.pluginsEnterSym(Global.scala:463)
    at scala.tools.nsc.typechecker.Namers$Namer.enterSym(Namers.scala:274)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$enterSyms$1.apply(Namers.scala:500)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$enterSyms$1.apply(Namers.scala:499)
    at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
    at scala.collection.immutable.List.foldLeft(List.scala:84)
    at scala.tools.nsc.typechecker.Namers$Namer.enterSyms(Namers.scala:499)
    at scala.tools.nsc.typechecker.Namers$Namer.templateSig(Namers.scala:925)
    at scala.tools.nsc.typechecker.Namers$Namer.moduleSig(Namers.scala:989)
    at scala.tools.nsc.typechecker.Namers$Namer.getSig$1(Namers.scala:1526)
    at scala.tools.nsc.typechecker.Namers$Namer.typeSig(Namers.scala:1541)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply$mcV$sp(Namers.scala:781)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply(Namers.scala:780)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1$$anonfun$apply$1.apply(Namers.scala:780)
    at scala.tools.nsc.typechecker.Namers$Namer.scala$tools$nsc$typechecker$Namers$Namer$$logAndValidate(Namers.scala:1568)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1.apply(Namers.scala:780)
    at scala.tools.nsc.typechecker.Namers$Namer$$anonfun$monoTypeCompleter$1.apply(Namers.scala:772)
    at scala.tools.nsc.typechecker.Namers$$anon$1.completeImpl(Namers.scala:1684)
    at scala.tools.nsc.typechecker.Namers$LockingTypeCompleter$class.complete(Namers.scala:1692)
    at scala.tools.nsc.typechecker.Namers$$anon$1.complete(Namers.scala:1682)
    at scala.reflect.internal.Symbols$Symbol.info(Symbols.scala:1483)
    at scala.reflect.internal.SymbolTable.openPackageModule(SymbolTable.scala:286)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:63)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:59)
    at scala.reflect.api.Trees$Traverser$$anonfun$traverseStats$1$$anonfun$apply$1.apply$mcV$sp(Trees.scala:2498)
    at scala.reflect.api.Trees$Traverser.atOwner(Trees.scala:2507)
    at scala.reflect.api.Trees$Traverser.traverseStats(Trees.scala:2497)
    at scala.reflect.internal.Trees$class.itraverse(Trees.scala:1326)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Traverser.traverse(Trees.scala:2475)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:66)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:59)
    at scala.reflect.api.Trees$Traverser$$anonfun$traverseStats$1$$anonfun$apply$1.apply$mcV$sp(Trees.scala:2498)
    at scala.reflect.api.Trees$Traverser.atOwner(Trees.scala:2507)
    at scala.reflect.api.Trees$Traverser.traverseStats(Trees.scala:2497)
    at scala.reflect.internal.Trees$class.itraverse(Trees.scala:1326)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.internal.SymbolTable.itraverse(SymbolTable.scala:16)
    at scala.reflect.api.Trees$Traverser.traverse(Trees.scala:2475)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:66)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2$$anon$4.traverse(Analyzer.scala:59)
    at scala.reflect.api.Trees$Traverser.apply(Trees.scala:2513)
    at scala.tools.nsc.typechecker.Analyzer$packageObjects$$anon$2.apply(Analyzer.scala:71)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$applyPhase$1.apply$mcV$sp(Global.scala:441)
    at scala.tools.nsc.Global$GlobalPhase.withCurrentUnit(Global.scala:432)
    at scala.tools.nsc.Global$GlobalPhase.applyPhase(Global.scala:441)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$run$1.apply(Global.scala:399)
    at scala.tools.nsc.Global$GlobalPhase$$anonfun$run$1.apply(Global.scala:399)
    at scala.collection.Iterator$class.foreach(Iterator.scala:750)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
    at scala.tools.nsc.Global$GlobalPhase.run(Global.scala:399)
    at scala.tools.nsc.Global$Run.compileUnitsInternal(Global.scala:1500)
    at scala.tools.nsc.Global$Run.compileUnits(Global.scala:1487)
    at scala.tools.nsc.Global$Run.compileSources(Global.scala:1482)
    at scala.tools.nsc.Global$Run.compile(Global.scala:1580)
    at xsbt.CachedCompiler0.run(CompilerInterface.scala:126)
    at xsbt.CachedCompiler0.run(CompilerInterface.scala:102)
    at xsbt.CompilerInterface.run(CompilerInterface.scala:27)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at sbt.compiler.AnalyzingCompiler.call(AnalyzingCompiler.scala:102)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:48)
    at sbt.compiler.AnalyzingCompiler.compile(AnalyzingCompiler.scala:41)
    at org.jetbrains.jps.incremental.scala.local.IdeaIncrementalCompiler.compile(IdeaIncrementalCompiler.scala:29)
    at org.jetbrains.jps.incremental.scala.local.LocalServer.compile(LocalServer.scala:26)
    at org.jetbrains.jps.incremental.scala.remote.Main$.make(Main.scala:65)
    at org.jetbrains.jps.incremental.scala.remote.Main$.nailMain(Main.scala:23)
    at org.jetbrains.jps.incremental.scala.remote.Main.nailMain(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.martiansoftware.nailgun.NGSession.run(NGSession.java:319)

by javadba at April 27, 2015 04:08 PM

Json object (de)serialization. Typed languages with class hierarchy

I encountered a quite common problem with JSON in a web client-server application.

Context: scala (could be any typed language with inheritance), typescript+angularjs, json representation in NoSql postgresql, same json sent to web client.

How can I add type to json representation to get features such as:

  1. describes generics like List
  2. enables easy usage of inheritance in javascript/typescript world
  3. can be deserialized line by line (like SAX) by json deserializer to get fast transformation with minimum memory used

Adding a type attribute to the object, like {myList: (...) , (...), type: ?? }, interferes with point 3, because there is no guarantee of attribute order.

Adding the type as an attribute name, List#Integer#: {myList: (...) , (...)}, makes the code ugly on the client side due to the additional wrapper/prefix everywhere.

How to solve this problem? Maybe somebody knows of a Scala JSON library that already supports types?

Many libraries just assume that you know what type you are loading...

by Waldemar Wosiński at April 27, 2015 04:07 PM

scala filter by type

I have read a TypeTag-related article, but I am unable to implement filtering a collection by element type.

Example:

trait A
class B extends A
class C extends A

val v = Vector(new B,new C)
v filter ( _.isInstanceOf[B] )

The code above works fine. However I want to extract filter out of v. E.g.

def filter[T,T2](data:Traversable[T2]) = (data filter (  _.isInstanceOf[T])).asInstanceOf[Traversable[T]]

//Then filter v by
filter[B,A](v)

In this case I get the warning "abstract type T is unchecked since it is eliminated by erasure". I tried to use TypeTag, but it seems it is not easy to get the Type at runtime.

Is there any elegant solution to implement this filter function? Any solution via a Scala macro is also acceptable.
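A sketch of the usual solution: a ClassTag survives erasure just far enough for a runtime instance-of check, which is all this filter needs. With a ClassTag in scope, the type pattern below compiles into a real class test instead of an unchecked one:

import scala.reflect.ClassTag

def filterByType[T: ClassTag](data: Traversable[Any]): Traversable[T] =
  data.collect { case t: T => t }  // the ClassTag makes `case t: T` checked

filterByType[B](v)  // keeps only the B instance, typed Traversable[B]

The caveat: this checks the erased class only, so it cannot distinguish, say, List[B] from List[C]; for that you would indeed need TypeTags or a macro.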

by worldterminator at April 27, 2015 04:07 PM

Functional style of Java 8's Optional.ifPresent and if-not-Present?

In Java 8, I want to do something with an Optional object if it is present, and do another thing if it is not present.

if (opt.isPresent())
  System.out.println("found");
else
  System.out.println("Not found");

But I think this is not very 'functional style'.

Optional has an 'ifPresent' method, but I am unable to chain an 'orElse' method. So I cannot write:

opt.ifPresent( x -> System.out.println("found " + x))
   .orElse( System.out.println("NOT FOUND"));

Is there any other way ?

=============================================

Thanks @assylias , but I don't think Optional.map() works for the following case :

opt.map( o -> {
  System.out.println("while opt is present...");
  o.setProperty(xxx);
  dao.update(o);
  return null;
}).orElseGet( () -> {
  System.out.println("create new obj");
  dao.save(new obj);
  return null;
});

In this case, when opt is present, I update its property and save it to the DB; when it is not available, I create a new obj and save it to the DB.

Note that in the two lambdas I have to return null.

But when opt is truly present, both lambdas will be executed: the object will be updated, and a new object will also be saved to the DB. This is because of the return null in the first lambda, which makes map() yield an empty Optional, so orElseGet() continues to execute.

by smallufo at April 27, 2015 04:06 PM

High Scalability

How can we Build Better Complex Systems? Containers, Microservices, and Continuous Delivery.

We must be able to create better complex software systems. That’s the message from Mary Poppendieck in a wonderful, far-ranging talk she gave at the Craft Conference: New New Software Development Game: Containers, Micro Services.

The driving insight is that complexity grows nonlinearly with size. The type of system doesn’t really matter, but we know software size will continue to grow, so software complexity will continue to grow even faster.

What can we do about it? The running themes are lowering friction and limiting risk:

  • Lower friction. This allows change to happen faster. Methods: dump the centralizing database; adopt microservices; use containers; better organize teams.

  • Limit risk. Risk is inherent in complex systems. Methods: PACT testing; continuous delivery.

Some key points:

  • When does software really grow? When smart people can do their own thing without worrying about their impact on others. This argues for building federated systems that ensure isolation, which argues for using microservices and containers.

  • Microservices usually grow successfully from monoliths. In creating a monolith developers learn how to properly partition a system.

  • Continuous delivery both lowers friction and lowers risk. In a complex system if you want stability, if you want security, if you want reliability, if you want safety then you must have lots of little deployments. 

  • Every member of a team is aware of everything. That's what makes a winning team. Good situational awareness.

The highlight of the talk for me was the section on the amazing design of the Swedish Gripen Fighter Jet. Talks on microservices tend to be highly abstract. The fun of software is in the building. Talk about parts can be so nebulous. With the Gripen the federated design of the jet as a System of Systems becomes glaringly concrete and real. If you can replace your guns, radar system, and virtually any other component without impacting the rest of the system, that’s something! Mary really brings this part of the talk home. Don’t miss it.

It’s a very rich and nuanced talk; there’s a lot of history and context given, so I can’t capture all the details, and watching the video is well worth the effort. Having said that, here’s my gloss on the talk...

Hardware Scales by Abstraction and Miniaturization

by Todd Hoff at April 27, 2015 03:56 PM

StackOverflow

Slick 3.0-RC3 fails with java.util.concurrent.RejectedExecutionException

I'm trying to get familiar with Slick 3.0 and Futures (using Scala 2.11.6). I use simple code based on Slick's Multi-DB Cake Pattern example. Why does the following code terminate with an exception, and how do I fix it?

import scala.concurrent.Await
import scala.concurrent.duration._
import slick.jdbc.JdbcBackend.Database
import scala.concurrent.ExecutionContext.Implicits.global

class Dispatcher(db: Database, dal: DAL) {
  import dal.driver.api._

  def init() = {
    db.run(dal.create)
    try db.run(dal.stuffTable += Stuff(23,"hi"))
    finally db.close

    val x = {
      try db.run(dal.stuffTable.filter(_.serial === 23).result)
      finally db.close
    }
    // This crashes:
    val result = Await.result(x, 2 seconds)
  }
}

Execution fails with:

java.util.concurrent.RejectedExecutionException: Task slick.backend.DatabaseComponent$DatabaseDef$$anon$2@5c73f637 rejected from java.util.concurrent.ThreadPoolExecutor@4129c44c[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 2]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136)
    at slick.backend.DatabaseComponent$DatabaseDef$class.runSynchronousDatabaseAction(DatabaseComponent.scala:224)
    at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:38)
    at slick.backend.DatabaseComponent$DatabaseDef$class.runInContext(DatabaseComponent.scala:201)
    at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:38)
    at slick.backend.DatabaseComponent$DatabaseDef$class.runInternal(DatabaseComponent.scala:75)
    at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:38)
    at slick.backend.DatabaseComponent$DatabaseDef$class.run(DatabaseComponent.scala:72)
    at slick.jdbc.JdbcBackend$DatabaseDef.run(JdbcBackend.scala:38)
    at Dispatcher.init(Dispatcher.scala:15)
    at SlickDemo$.main(SlickDemo.scala:16)
    at SlickDemo.main(SlickDemo.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
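A plausible diagnosis, hedged: db.run only schedules the action and returns a Future immediately, so finally db.close tears the thread pool down while work is still queued, and the next db.run is rejected by the terminated executor (note "Terminated, pool size = 0" in the message). A sketch that composes the actions into one DBIO and closes only after the result has arrived; it assumes dal.create is a DBIO action, as its use with db.run above suggests:

def init() = {
  val program = for {
    _  <- dal.create
    _  <- dal.stuffTable += Stuff(23, "hi")
    xs <- dal.stuffTable.filter(_.serial === 23).result
  } yield xs

  try Await.result(db.run(program), 2.seconds)
  finally db.close()  // close once, after everything has run
}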

by user8117 at April 27, 2015 03:54 PM

TheoryOverflow

Is unary factoring easier than binary factoring? [on hold]

I've been studying ideas in complexity for a little while now, but I realized that I don't actually know why factoring is a difficult problem- naively it looked like a linear algorithm (simply running through all the numbers less than a given number, to see if they are factors). This isn't linear, however, because the inputs are encoded in binary: trial division up to N takes about N steps, which is 2^n steps in terms of the input length n ≈ log N. So then, is factoring in unary an 'easy' problem?

I've often struggled to understand how the mere fact that something is encoded in binary seems to give rise to its computational difficulty- this is also the case with Knapsack and the analogous Unary Knapsack, correct?

by Anthony at April 27, 2015 03:51 PM

StackOverflow

Capturing comma separated list as list through regex in scala

I have input in the following format:

"DataType: FieldName1, Fieldname2,FieldName3" 

Where you can have 1 or more field names.

So for example:

User: Name, Address
Person: Age, Address,DOB

I'm trying to capture the DataType in a string and the fields in an array using Scala regex group capture; this is what I have so far:

val dataTypeAndFieldsRegex = """(.+):(.*(,.*)?)""".r

"Person: Age, Address, DOB" match {
  case dataTypeAndFieldsRegex(dataType, fields, _*) => {
    println("dataType: " + dataType)
    println("fields: " + fields)
  }

The problem is that fields here is a string. How can I capture the fields as an array?
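A sketch of the standard workaround: Java/Scala regexes cannot return a variable number of captures (a repeated group keeps only its last match), so capture the whole tail in one group and split it afterwards:

val dataTypeAndFieldsRegex = """(.+):\s*(.+)""".r

"Person: Age, Address, DOB" match {
  case dataTypeAndFieldsRegex(dataType, fields) =>
    val fieldArray: Array[String] = fields.split(",").map(_.trim)
    println("dataType: " + dataType)
    println("fields: " + fieldArray.mkString("[", ", ", "]"))
}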

by Micangello at April 27, 2015 03:44 PM

CompsciOverflow

Can we build a nondeterministic decider PDA using two PDAs accepting a language and its complement?

When talking about Turing machines, it can easily be shown that, starting from two machines accepting $L$ and its complement $L^c$, one can build a machine which fully decides whether a word is inside $L$ or not.

But what about PDAs? Starting from two different PDAs, one accepting $L$ and one accepting $L^c$, can we build another PDA which accepts $L$ and only crashes or halts in non-final states (rejects) when $w \notin L$?

by Ali.S at April 27, 2015 03:40 PM

StackOverflow

Ansible: how to continue execution on failed task after fixing error in playbook?

When writing and debugging Ansible playbooks, the typical workflow is as follows:

  1. ansible-playbook ./main.yaml
  2. Playbook fails on some task
  3. Fix this task and repeat step 1, waiting for all previous tasks to execute again, which takes a lot of time.

Ideally, I'd like to resume execution at the failed task, with the inventory and all facts collected by previous tasks still available. Is that even possible? How can I make playbook writing/debugging faster?

by Sergey Alaev at April 27, 2015 03:37 PM

QuantOverflow

Are CME security id's unique and constant over time?

For any given day, CME security IDs are unique - a number will always refer to a single product.

Are they unique over time as well? That is, might a new security have a security id that used to be used by an expired one?

And are they constant? That is, does a given security keep the same security id over its lifetime?

by Cookie at April 27, 2015 03:35 PM

StackOverflow

idiomatic lazy atoms in clojure

I am playing a bit with atoms in Clojure. I have an atom pointing at a lazy seq. In another bit of code I want to update the value of the atom to the result of calling next on the sequence, but given that both swap! and reset! return the updated value, execution never ends (the REPL tries to print the infinite sequence that is returned). I figured out that I could always wrap the call to swap!/reset! in a do statement and then return nil, but I am wondering how idiomatic this is, or whether there is an alternative solution.

Doesn't terminate:

(def x (atom (range)))
(swap! x next)

Terminates

(def x (atom (range)))
(do (swap! x next) nil)
(first @x) ;1
(do (swap! x next) nil)
(first @x) ;2

by emanjavacas at April 27, 2015 03:30 PM

CompsciOverflow

Determining which states in a transition system satisfy a specific CTL formula

Trying to answer the following question:

(figure: transition system and CTL formula, not reproduced)

However, my answer is that only one of these states satisfies the formula (which is surely wrong, since the next part of this question asks to remove the states that don't satisfy the formula and compute the new TS).

Reasoning follows:

1 - does not satisfy the formula since if you go to 4, EXISTS NEXT c is violated

2 - does not satisfy the formula since if you go to 4, EXISTS NEXT c is violated

3 - does not satisfy the formula since if you go to 1, EXISTS NEXT c is violated

4 - satisfies the formula since all paths satisfy EXISTS NEXT c

5 - does not satisfy the formula since if you go to 6, EXISTS NEXT c is violated

6 - does not satisfy the formula since if you go to 4, EXISTS NEXT c is violated

Can anyone see where I have gone wrong with my reasoning?

Something else I'm not sure about: if we take 4, for example, it is satisfied since all paths lead to other states that (together) satisfy the formula. Do we need to include these 'other states' in the satisfaction set?

Really grateful for any help.

by eyes enberg at April 27, 2015 03:11 PM

QuantOverflow

Equity Chart - design and granularity

I am looking to build a web based Equity chart to display performance of FX trading strategies.

I would like to hear opinions and advice on a few areas that I am unsure about.

Granularity

Equity can - and typically does - change every tick. Should I therefore save equity every tick? If I do I am likely to be saving a lot of data! And then the display of this data will also be a challenge - as there would be a lot of noise.

If I am to save a snapshot every moment in time, what would be a recommended timeframe? Every minute?

Optimizing for download

As the amount of data in the equity chart could be quite large, what are some recommended approaches to optimize for download? Would it be advisable to somehow smooth the equity curve and download just a vector line rather than downloading a csv/json with many thousands of datapoints?

Thanks for any feedback - it's really appreciated.

by Magick at April 27, 2015 02:35 PM

AWS

AWS Week in Review – April 20, 2015

Let’s take a quick look at what happened in AWS-land last week:

Monday, April 20
Tuesday, April 21
Wednesday, April 22
Thursday, April 23
Friday, April 24
Sunday, April 26

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Jeff;

by Jeff Barr at April 27, 2015 02:35 PM

Christian Neukirchen

27apr2015

by Christian Neukirchen (chneukirchen@gmail.com) at April 27, 2015 02:32 PM

Lambda the Ultimate Forum

What makes LtU more or less enjoyable?

What makes LtU more or less enjoyable?

These last few months I have been a bit frustrated by my own relation to LtU; I wondered on a few occasions whether I should stop visiting the website frequently, and I eventually sort of decided not to. It is not easy to voice exactly what the problem is, even less easy to know what a solution could be.

I would say the observed symptom of the problem is simple: the proportion of LtU's discussion that I feel strongly interested in is decreasing. What about you?

I think a decrease in interest in LtU wouldn't be a frustrating problem if there were another place to supersede it. Unfortunately, I know of no such place: the alternative that is developing right now is a balkanization into a multitude of places, many of which are walled gardens (e.g. Quora) that I don't wish to help grow prosperous.

(It's of course to be expected that not all LtU discussions interest everyone, as the topic is quite broad and people have different tastes about different subdomains.)

Why am I less interested in LtU discussions? I think there was at times a better balance between technical discussion around articles (articles mostly following the standards of academic presentation) and less-focused discussion with possibly more radical but less precise viewpoints.

I don't think there are more less-focused / off-original-topic discussions than before, or too many of them, but rather that there are not enough of the more structured technical comments. In particular, I mean no criticism of the current LtU members or discussions, which bring many interesting points of view -- there might be things to improve in this area, but I don't think that is where the real gains are. I would be more interested in attracting more "research discussions", but I don't know how to do it.

On the positive sides, here are three examples of interactions that I personally enjoyed a lot recently, and would by themselves justify my continued LtU activity this year:

  • Tom Primožič linked the draft version of Andreas Rossberg's "amazing" 1ML paper; without this link, I probably wouldn't have learned about this exciting work for a few months.
  • Sean McDirmid posted article versions of his work on type inference (and previously, Glitch), that helped make more precise interesting discussions that had been going on and off on LtU for a long time.
  • Robert Atkey saw an on-the-side remark inside the LtU submission on the very interesting work on incremental computation, and gave it enough thought to produce an amazing blog post -- that I'm sure will bear further fruits.

(That's of course not the only thing I liked on LtU recently. There are many things I come to know through LtU that I wouldn't otherwise learn about, typically on approaches to programming languages that are closer to the social sciences: user psychology and experimental studies, sociology of adoption, etc.)

What were your own "value moments" on LtU lately?

April 27, 2015 02:30 PM

StackOverflow

Scala Annotation Inheritance

In Scala, I have an annotation and a base trait carrying that annotation, but a class extending the trait doesn't inherit the annotation:

scala> import scala.annotation.StaticAnnotation
import scala.annotation.StaticAnnotation

scala> case class AnnotationClass() extends StaticAnnotation
defined class AnnotationClass

scala> @AnnotationClass trait BaseTrait
defined trait BaseTrait

scala> class InheritingClass extends BaseTrait
defined class InheritingClass

scala> import scala.reflect.runtime.universe._
import scala.reflect.runtime.universe._

scala> typeOf[BaseTrait].typeSymbol.asClass.annotations.size
res1: Int = 1

scala> typeOf[InheritingClass].typeSymbol.asClass.annotations.size
res0: Int = 0

Is there a way to get the subclass to inherit the annotation of the parent?

by MrEnzyme at April 27, 2015 02:28 PM

How to achieve following transformation using Spark 1.3.1 DataFrame API?

I just started with Spark 1.3.1 and am not able to figure out the best way to achieve the following transformation using the available APIs.

I have a DataFrame as follows:

ID|TYPE|COUNT

1|A|12
1|B|10
2|A|7
2|B|9

Which needs to be transformed as:

ID|A_COUNT|B_COUNT

1|12|10
2|7|9
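
For what it's worth, a sketch of one way to do this in 1.3.1 by dropping to the RDD API (the pivot and when helpers only arrived in later Spark releases). It assumes integer ID/COUNT columns and at most one row per (ID, TYPE), with only the types A and B occurring:

import org.apache.spark.sql.{DataFrame, Row, SQLContext}

def pivotCounts(df: DataFrame, sqlContext: SQLContext): DataFrame = {
  import sqlContext.implicits._
  df.rdd
    .map { case Row(id: Int, tpe: String, count: Int) => (id, (tpe, count)) }
    .groupByKey()
    .map { case (id, pairs) =>
      val byType = pairs.toMap                      // e.g. Map("A" -> 12, "B" -> 10)
      (id, byType.getOrElse("A", 0), byType.getOrElse("B", 0))
    }
    .toDF("ID", "A_COUNT", "B_COUNT")
}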

Thanks in advance!

by Debajyoti Roy at April 27, 2015 02:26 PM

TheoryOverflow

Is the Chomsky-hierarchy outdated?

The Chomsky(–Schützenberger) hierarchy is used in textbooks of theoretical computer science, but it obviously only covers a very small fraction of formal languages (REG, CFL, CSL, RE) compared to the full Complexity Zoo diagram. Does the hierarchy play any role in current research anymore? I found only a few references to Chomsky here at cstheory.stackexchange, and in the Complexity Zoo the names Chomsky and Schützenberger are not mentioned at all.

Is current research more focused on other means of description than formal grammars? I was looking for practical methods to describe formal languages with different expressiveness, and stumbled upon growing context-sensitive languages (GCSL) and visibly pushdown languages (VPL), which both lie between the classic Chomsky languages. Shouldn't the Chomsky hierarchy be updated to include them? Or is there no use in selecting a specific hierarchy from the full set of complexity classes? I tried to select only those languages that fit in the gaps of the Chomsky hierarchy, as far as I understand:

REG (=Chomsky 3) ⊊ VPL ⊊ DCFL ⊊ CFL (=Chomsky 2) ⊊ GCSL ⊊ CSL (=Chomsky 1) ⊊ R ⊊ RE

I still don't get where "mildly context-sensitive languages" and "indexed languages" fit in (somewhere between CFL and CSL), although they seem to be of practical relevance for natural language processing (but maybe anything of practical relevance is less interesting in theoretical research ;-)). In addition you could mention GCSL ⊊ P ⊂ NP ⊂ PSPACE and CSL ⊊ PSPACE ⊊ R to show the relation to the famous classes P and NP.

I found on GCSL and VPL:

I'd also be happy if you know any more recent textbook on formal grammars that also deals with VPL, DCFL, GCSL and indexed grammars, preferably with pointers to practical applications.

by Jakob at April 27, 2015 02:20 PM

StackOverflow

Storing a LatLon class as GeoJSON using MongoDB / Morphia (Scala or Java)

I'm writing an application that needs to make use of geo-referenced data, and I'd like to use MongoDB + Morphia. The application is in Scala, but if a portion needs to be Java, that's ok (yay compatibility!)

I have a class to represent the Latitude and Longitude of events:

class LatLon
{
  // vars need initializers to be concrete members
  @BeanProperty
  var latDegrees: Double = 0.0
  @BeanProperty
  var lonDegrees: Double = 0.0
}

It's not a very exciting class, but it is useful in this context.

Now, I have an event that I record at a location:

class ObservedEvent
{
  @BeanProperty
  var observation : String = _

  @BeanProperty
  var location : LatLon = _
}

Now, I have a ton of observed events and I want to store them in MongoDB with Morphia. The 'location' should be stored as a GeoJSON Point so I can index the collection, etc. I have tried making SimpleValueConverters, adapters, and a few other hacks, but I haven't been able to figure out how to make this work. It seems like such a common use-case that it would be built in. Hopefully the answer here is "It's built in! Look [here]". If it is, I haven't found it :(
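
Not a documented answer, but for reference, a rough sketch of the converter approach, assuming Morphia's TypeConverter/SimpleValueConverter API (package names and method signatures should be checked against the Morphia version in use):

import com.mongodb.BasicDBObject
import org.mongodb.morphia.converters.{SimpleValueConverter, TypeConverter}
import org.mongodb.morphia.mapping.MappedField

// Store LatLon as a GeoJSON Point: {type: "Point", coordinates: [lon, lat]}.
class LatLonConverter extends TypeConverter(classOf[LatLon]) with SimpleValueConverter {

  // GeoJSON orders coordinates as [longitude, latitude]
  override def encode(value: AnyRef, optionalExtraInfo: MappedField): AnyRef =
    value match {
      case ll: LatLon =>
        new BasicDBObject("type", "Point")
          .append("coordinates", java.util.Arrays.asList(ll.lonDegrees, ll.latDegrees))
      case _ => null
    }

  override def decode(targetClass: Class[_], fromDBObject: AnyRef, optionalExtraInfo: MappedField): AnyRef =
    fromDBObject match {
      case dbo: BasicDBObject =>
        val coords = dbo.get("coordinates").asInstanceOf[java.util.List[java.lang.Double]]
        val ll = new LatLon
        ll.lonDegrees = coords.get(0)
        ll.latDegrees = coords.get(1)
        ll
      case _ => null
    }
}

The converter would then be registered with the mapper (something like morphia.getMapper.getConverters.addConverter(new LatLonConverter)), after which a 2dsphere index on 'location' should work against the stored shape.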

Thanks!

by fbl at April 27, 2015 02:20 PM

TheoryOverflow

Clarification needed on a proposed algorithm in a published paper called PAU-DL

There is an algorithm called Parallel Atom-Updating Dictionary Learning (PAU-DL) in the context of image and video processing, dictionary learning. I read about it in this published paper:

Learning Overcomplete Dictionaries Based on Atom-by-Atom Updating

The paper says that updating the whole dictionary by updating its atoms one by one has the following problem: when updating the atom $d_k$ (the $k$th column of $D$), the updated versions of the previous columns are used, while $d_k,\ldots,d_K$ have not yet been updated. The authors then claim that the proposed algorithm updates the atoms in parallel.

I wonder how this is possible. I mean, K-SVD is about updating each atom of $D$ and then updating the corresponding row in the coefficient matrix. What do we do in the parallel form of the algorithm? Please shed some light on the problem so that the two algorithms can be compared using the information in your answer. I failed to find any online sources that explain the new algorithm. Thank you.

by Gigili at April 27, 2015 02:06 PM

StackOverflow

Receiving multipart messages with clrzmq

About multipart messages Zguide states:

 "You will receive all parts of a message, or none at all."

http://zguide.zeromq.org/php:chapter2#Multipart-Messages

However, on the clrzmq the method for (nonblocking) receive of multipart messages is:

/// The <paramref name="frameTimeout"/> will be used for each underlying Receive operation. If the timeout
/// elapses before the last message is received, an incomplete message will be returned. 
...
public static ZmqMessage ReceiveMessage(this ZmqSocket socket, TimeSpan frameTimeout)

https://github.com/zeromq/clrzmq/blob/master/src/ZeroMQ/SendReceiveExtensions.cs

This seems contradictory. Under what circumstances can ReceiveMessage return an incomplete message? And what is the proper way to ask, non-blockingly via clrzmq, for the next complete multipart message, so that I either get a complete multipart message or nothing at all (with a partially arrived message becoming available later)?

by user2771321 at April 27, 2015 02:03 PM

scala json validation not working

I am following **Play for Scala** for validation and parsing of JSON.

I receive a request in the controller and convert it to a JsValue like this:

 val jsonRequest = request.body.asJson.get

I am trying to validate it like this:

jsonRequest.validate[ReadArtWorkCaseClass].fold(
  valid = { readArtWorkCaseClass =>
    log.info("valid block")
    Ok("validation successful")
  },
  invalid = { errors =>
    log.info("invalid block")
    BadRequest(JsError.toFlatJson(errors))
  }
)

I have implemented a Reads for this:

case class ReadArtWorkCaseClass(artworkid :String,
                                artistid :String,
                                institutionid :String ,
                                status :String,
                                groupactivityid:String,
                                details:String,
                                pricehistoryid :String,
                                sku :String,
                                dimensions :String,
                                artworkname :String,
                                artworkseries :String ,
                                workclassifier :String ,
                                genreid :String,
                                artworktype :String,
                                createddate:String)
object ReadArtWorkCaseClass {

  implicit val artworkReads: Reads[ReadArtWorkCaseClass] = (
    (JsPath \ "artWorkid").read[String] and
    (JsPath \ "artistid").read[String] and
    (JsPath \ "institutionid").read[String] and
    (JsPath \ "activationStatus").read[String] and
    (JsPath \ "groupactivityid").read[String] and
    (JsPath \ "details").read[String] and
    (JsPath \ "pricehistoryid").read[String] and
    (JsPath \ "sku").read[String] and
    (JsPath \ "dimensions").read[String] and
    (JsPath \ "artworkname").read[String] and
    (JsPath \ "artworkseries").read[String] and
    (JsPath \ "workclassifier").read[String] and
    (JsPath \ "genreid").read[String] and
    (JsPath \ "artworktype").read[String] and
    (JsPath \ "createddate").read[String]
  )(ReadArtWorkCaseClass.apply _)
}

When I try to validate the JSON request with empty fields, it does not go into the invalid block; instead it runs the valid block.
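
As a hedged aside: an empty string still satisfies read[String], so if empty fields are supposed to fail validation, each field needs an explicit constraint, for example with Play's minLength combinator:

import play.api.libs.json.Reads.minLength

// read[String] alone accepts ""; a length constraint makes empty fields invalid
(JsPath \ "artistid").read[String](minLength[String](1))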

Please guide me: what is my mistake?

by M.Ahsen Taqi at April 27, 2015 02:02 PM

Planet Clojure

Clojure/West 2015: Notes from Day One

Life of a Clojure Expression

  • John Hume, duelinmarkers.com (DRW trading)
  • a quick tour of clojure internals
  • giving the talk in org mode (!)
  • disclaimers: no expert, internals can change, excerpts have been mangled for readability
  • most code will be java, not clojure
  • (defn m [v] {:foo "bar" :baz v})
  • minor differences: calculated key, constant values, more than 8 key/value pairs
  • MapReader called from static array of IFns used to track macros; triggered by the '{' character
  • PersistentArrayMap used for less than 8 objects in map
  • eval treats forms wrapped in (do..) as a special case
  • if form is non-def bit of code, eval will wrap it in a 0-arity function and invoke it
  • eval’s macroexpand will turn our form into (def m (fn [v] {:foo "bar" :baz v}))
  • checks for duplicate keys twice: once on read, once on analyze, since forms for keys might have been evaluated into duplicates
  • java class emitted at the end with name of our fn tacked on, like: class a_map$m
  • IntelliJ will report a lot of unused methods in the java compiler code, but what’s happening is the methods are getting invoked, but at load time via some asm method strings
  • no supported api for creating small maps with compile-time constant keys; array-map is slow and does a lot of work it doesn’t need to do

Clojure Parallelism: Beyond Futures

  • Leon Barrett, the climate corporation
  • climate corp: model weather and plants, give advice to farmers
  • wrote Claypoole, a parallelism library
  • map/reduce to compute average: might use future to shove computation of the average divisor (inverse of # of items) off at the beginning, then do the map work, then deref the future at the end
  • future -> future-call: sends fn-wrapped body to an Agent/soloExecutor
  • concurrency vs parallelism: concurrency means things could be re-ordered arbitrarily, parallelism means multiple things happen at once
  • thread pool: recycle a set number of threads to avoid constantly incurring the overhead of creating a new thread
  • agent thread pool: used for agents and futures; program will not exit while threads are there; lifetime of 60 sec
  • future limitations
    • tasks too small for the overhead
    • exceptions get wrapped in ExecutionException, so your try/catches won’t work normally anymore
  • pmap: just a parallel map; lazy; runs N-cpu + 3 tasks in futures
    • generates threads as needed; could have problems if you’re creating multiple pmaps at once
    • slow task can stall it, since it waits for the first task in the sequence to complete for each trip through
    • also wraps exceptions just like future
  • laziness and parallelism: don’t mix
  • core.async
    • channels and coroutines
    • reads like go
    • fixed-size thread pool
    • handy when you’ve got a lot of callbacks in your code
    • mostly for concurrency, not parallelism
    • can use pipeline for some parallelism; it’s like a pmap across a channel
    • exceptions can kill coroutines
  • claypoole
    • pmap that uses a fixed-size thread pool
    • with-shutdown! will clean up thread pool when done
    • eager by default
    • output is an eagerly streaming sequence
    • also get pfor (parallel for)
    • lazy versions are available; can be better for chaining (fast pmap into slow pmap would have speed mismatch with eagerness)
    • exceptions are re-thrown properly
    • no chunking worries
    • can have priorities on your tasks
  • reducers
    • uses fork/join pool
    • good for cpu-bound tasks
    • gives you a parallel reduce
  • tesser
    • distributable on hadoop
    • designed to max out cpu
    • gives parallel reduce as well (fold)
  • tools for working with parallelism:
    • promises to block the state of the world and check things
    • YourKit (?) for JVM profiling

Boot Can Build It

  • Alan Dipert and Micha Niskin, adzerk
  • why a new build tool?
    • build tooling hasn’t kept up with the complexity of deploys
    • especially for web applications
    • builds are processes, not specifications
    • most tools: maven, ant, oriented around configuration instead of programming
  • boot
    • many independent parts that do one thing well
    • composition left to the user
    • maven for dependency resolution
    • builds clojure and clojurescript
    • sample boot project has main method (they used java project for demo)
    • uses '--' for piping tasks together (instead of the real |)
    • filesets are generated and passed to a task, then output of task is gathered up and sent to the next task in the chain (like ring middleware)
  • boot has a repl
    • can do most boot tasks from the repl as well
    • can define new build tasks via deftask macro
    • (deftask build …)
    • (boot (watch) (build))
  • make build script: (build.boot)
    • #!/usr/bin/env boot
    • write in the clojure code defining and using your boot tasks
    • if it’s in build.boot, boot will find it on command line for help and automatically write the main fn for you
  • FileSet: immutable snapshot of the current files; passed to task, new one created and returned by that task to be given to the next one; task must call commit! to commit changes to it (a la git)
  • dealing with dependency hell (conflicting dependencies)
    • pods
    • isolated runtimes, with own dependencies
    • some things can’t be passed between pods (such as the things clojure runtime creates for itself when it starts up)
    • example: define pod with env that uses clojure 1.5.1 as a dependency, can then run code inside that pod and it’ll only see clojure 1.5.1

One Binder to Rule Them All: Introduction to Trapperkeeper

  • Ruth Linehan and Nathaniel Smith; puppetlabs
  • back-end service engineers at puppetlabs
  • service framework for long-running applications
  • basis for all back-end services at puppetlabs
  • service framework:
    • code generalization
    • component reuse
    • state management
    • lifecycle
    • dependencies
  • why trapperkeeper?
    • influenced by clojure reloaded pattern
    • similar to component and jake
    • puppetlabs ships on-prem software
    • need something for users to configure, may not have any clojure experience
    • needs to be lightweight: don’t want to ship jboss everywhere
  • features
    • turn on and off services via config
    • multiple web apps on a single web server
    • unified logging and config
    • simple config
  • existing services that can be used
    • config service: for parsing config files
    • web server service: easily add ring handler
    • nrepl service: for debugging
    • rpc server service: nathaniel wrote
  • demo app: github -> trapperkeeper-demo
  • anatomy of service
    • protocol: specifies the api contract that that service will have
    • can have any number of implementations of the contract
    • can choose between implementations at runtime
  • defservice: like defining a protocol implementation, one big series of defs of fns: (init [this context] (let …)))
    • handle dependencies in defservice by vector after service name: [[:ConfigService get-in-config] [:MeowService meow]]
    • lifecycle of the service: what happens when initialized, started, stopped
    • don’t have to implement every part of the lifecycle
  • config for the service: pulled from file
    • supports .json, .edn, .conf, .ini, .properties, .yaml
    • can specify single file or an entire directory on startup
    • they prefer .conf (HOCON)
    • have to use the config service to get the config values
    • bootstrap.cfg: the config file that controls which services get picked up and loaded into app
    • order is irrelevant: will be decided based on parsing of the dependencies
  • context: way for service to store and access state locally not globally
  • testing
    • should write code as plain clojure
    • pass in context/config as plain maps
    • trapperkeeper provides helper utilities for starting and stopping services via code
    • with-app-with-config macro: offers symbol to bind the app to, plus define config as a map, code will be executed with that app binding and that config
  • there’s a lein template for trapperkeeper that stubs out working application with web server + test suite + repl
  • repl utils:
    • start, stop, inspect TK apps from the repl: (go); (stop)
    • don’t need to restart whole jvm to see changes: (reset)
    • can print out the context: (:MeowService (context))
  • trapperkeeper-rpc
    • macro for generating RPC versions of existing trapperkeeper protocols
    • supports https
    • defremoteservice
    • with web server on one jvm and core logic on a different one, can scale them independently; can keep web server up even while swapping out or starting/stopping the core logic server
    • future: rpc over ssl websockets (using message-pack in transit for data transmission); metrics, function retrying; load balancing

Domain-Specific Type Systems

  • Nathan Sorenson, sparkfund
  • you can type-check your dsls
  • libraries are often examples of dsls: not necessarily macros involved, but have opinionated way of working within a domain
  • many examples pulled from “How to Design Programs”
  • domain represented as data, interpreted as information
  • type structure: syntactic means of enforcing abstraction
  • abstraction is a map to help a user navigate a domain
    • audience is important: would give different map to pedestrian than to bus driver
  • can also think of abstraction as specification, as dictating what should be built or how many things should be built to be similar
  • showing Inception to programmers is like showing Jaws to a shark
  • fable: parent trap over complex analysis
  • moral: types are not data structures
  • static vs dynamic specs
    • static: types; things as they are at compile time; definitions and derivations
    • dynamic: things as they are at runtime; unit tests and integration tests; expressed as falsifiable conjectures
  • types not always about enforcing correctness, so much as describing abstractions
  • simon peyton jones: types are the UML of functional programming
  • valuable habit: think of the types involved when designing functions
  • spec-tacular: more structure for datomic schemas
    • from sparkfund
    • the type system they wanted for datomic
    • open source but not quite ready for public consumption just yet
    • datomic too flexible: attributes can be attached to any entity, relationships can happen between any two entities, no constraints
    • use specs to articulate the constraints
    • (defspec Lease [lesse :is-a Corp] [clauses :is-many String] [status :is-a Status])
    • (defenum Status …)
    • wrote query language that’s aware of the defined types
    • uses bi-directional type checking: github.com/takeoutweight/bidirectional
    • can write sensical error messages: Lease has no field ‘lesee’
    • can pull type info from their type checker and feed it into core.typed and let core.typed check use of that data in other code (enforce types)
    • does handle recursive types
    • no polymorphism
  • resources
    • practical foundations for programming languages: robert harper
    • types and programming languages: benjamin c pierce
    • study haskell or ocaml; they’ve had a lot of time to work through the problems of types and type theory
  • they’re using spec-tacular in production now, even using it to generate type docs that are useful for non-technical folks to refer to and discuss; but don’t feel the code is at the point where other teams could pull it in and use it easily

ClojureScript Update

  • David Nolen
  • ambly: cljs compiled for iOS
  • uses bonjour and webdav to target ios devices
  • creator already has app in app store that was written entirely in clojurescript
  • can connect to device and use repl to write directly on it (!)

Clojure Update

  • Alex Miller
  • clojure 1.7 is at 1.7.0-beta1 -> final release approaching
  • transducers coming
  • define a transducer as a set of operations on a sequence/stream
    • (def xf (comp (filter odd?) (map inc) (take 5)))
  • then apply transducer to different streams
    • (into [] xf (range 1000))
    • (transduce xf + 0 (range 1000))
    • (sequence xf (range 1000))
  • reader conditionals
    • portable code across clj platforms
    • new extension: .cljc
    • use to select out different expressions based on platform (clj vs cljs)
    • #?(:clj (java.util.Date.)
      :cljs (js/Date.))
    • can fall through the conditionals and emit nothing (not nil, but literally don’t emit anything to be read by the reader)
  • performance has also been a big focus
    • reduced class lookups for faster compile times
    • iterator-seq is now chunked
    • multimethod default value dispatch is now cached

by Mindbat at April 27, 2015 02:00 PM

StackOverflow

Clojure threading first macro -> with Math/pow or any other multiple args function

How do I write the following code in one line:

(-> 10 pow9)

where pow9 is:

(def pow9 (partial #(Math/pow % 9)))

If I write (-> 10 (partial #(Math/pow % 9))) I get back #<core$partial$fn__4228 clojure.core$partial$fn__4228@62330c23> instead, while writing (-> 10 #(Math/pow % 9)) fails with CompilerException java.lang.ClassCastException: java.lang.Long cannot be cast to clojure.lang.ISeq, compiling:(NO_SOURCE_PATH:1:1),

although (-> 10 pow9) works fine.

The more general question is how to use -> with a function which accepts more than one argument, i.e. how to make (-> 10 #(+ % 10)) work?

by zshamrock at April 27, 2015 01:59 PM

How to call a method in a catch clause on an object defined in a try clause?

I am creating a redis pubsub client in a try-catch block. In the try block, the client is initialised with a callback to forward messages to a client. If there's a problem sending the message to the client, an exception will be thrown, in which case I need to stop the redis client. Here's the code:

try {
  val redisClient = RedisPubSub(
    channels = Seq(currentUserId.toString),
    patterns = Seq(),
    onMessage = (pubSubMessage: PubSubMessage) => {
      responseObserver.onValue(pubSubMessage.data)
    }
  )
}
catch {
  case e: RuntimeException =>
    // redisClient isn't defined here...
    redisClient.unsubscribe(currentUserId.toString)
    redisClient.stop()
    messageStreamResult.complete(Try(true))
    responseObserver.onCompleted()
}

The problem is that the redis client val isn't defined in the catch block because there may have been an exception creating it. I also can't move the try-catch block into the callback because there's no way (that I can find) of referring to the redisClient object from within the callback (this doesn't resolve).

To solve this I'm instantiating redisClient as a var outside the try-catch block. Then inside the try block I stop the client and assign a new RedisPubSub (created as above) to the redisClient var. That's an ugly hack which is also error prone (e.g. if there genuinely is a problem creating the second client, the catch block will try to call methods on an erroneous object).

Is there a better way of writing this code so that I can correctly call stop() on the redisClient if an exception is raised when trying to send the message to the responseObserver?

Update

I've just solved this using promises. Is there a simpler way though?
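
For comparison, a minimal sketch of one alternative shape using scala.util.Try, which keeps the client in scope for cleanup by separating construction failure from usage failure (RedisPubSub and the surrounding names are taken from the question):

import scala.util.{Failure, Success, Try}

Try(
  RedisPubSub(
    channels = Seq(currentUserId.toString),
    patterns = Seq(),
    onMessage = (pubSubMessage: PubSubMessage) => responseObserver.onValue(pubSubMessage.data)
  )
) match {
  case Failure(e) =>
    // construction failed: there is no client to clean up
    messageStreamResult.complete(Try(true))
    responseObserver.onCompleted()
  case Success(redisClient) =>
    try {
      // use redisClient here; it is in scope if anything below throws
    } catch {
      case e: RuntimeException =>
        redisClient.unsubscribe(currentUserId.toString)
        redisClient.stop()
        messageStreamResult.complete(Try(true))
        responseObserver.onCompleted()
    }
}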

by jbrown at April 27, 2015 01:57 PM

Clojure Architecture like Uncle Bob did

I am trying to implement a Clojure architecture like the one Uncle Bob describes at http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html and in his Clean Code videos, Episode 07 - Architecture, Use Cases and High Level Design.

Nothing in an inner circle can know anything at all about something in an outer circle.


I want to code the core of the app with all business rules and tests. This core has to define operations on "objects" in the database, like user, payment, advertisement, etc. But the implementation of how this is done has to live in an outer layer of the application.

So the question is: can you give me an example of a well-architected application on GitHub, like in the image with the circles? I am learning Clojure and I want to see how it can be done technically. I am trying to do it myself but with poor results. A simple code example would help me a lot. I want to know, step by step, how to create layers in Clojure like in the image.

I would be glad for any information on how to do that well in Clojure. It can be code, video, or an article; free or paid.

by kabra at April 27, 2015 01:49 PM

QuantOverflow

Creating correlated Brownian motions from independent ones

This is an exercise from the book "Stochastic Calculus for Finance, Volume II", page 199.

Let $(W_{1}(t),\ldots,W_{d}(t))$ be a $d$-dimensional Brownian motion and $(\sigma_{ij}(t))_{m\times d}$ be a matrix-valued process adapted to the filtration associated with the Brownian motion.

Define $$\sigma_{i}(t) = \Big[\sum_{j=1}^{d}\sigma_{ij}^{2}(t)\Big]^{1/2}$$ and assume this is never zero. Define also $$ B_{i}(t) = \sum_{j=1}^{d} \int_{0}^{t} \frac{\sigma_{ij}(u)}{\sigma_{i}(u)}\,dW_{j}(u). $$ Show that, for each $i$, $B_{i}$ is a Brownian motion.
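
One standard route, for orientation: each $B_{i}$ is a continuous local martingale with $B_{i}(0) = 0$, and its quadratic variation satisfies $$dB_{i}(t)\,dB_{i}(t) = \sum_{j=1}^{d}\frac{\sigma_{ij}^{2}(t)}{\sigma_{i}^{2}(t)}\,dt = dt,$$ so Lévy's characterization gives the result.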

by Ben DAI at April 27, 2015 01:48 PM

Validation of Bates SVJ model

I have just finished implementing the Bates model for pricing European call options.

To check results, I have been looking for a validation set where I could see the Bates parameter values and their associated call prices. However, I have not been able to obtain this information online.

Could anyone point me to any paper or reference where parameter values and call prices for the Bates model are given?

by sets at April 27, 2015 01:47 PM

StackOverflow

Compojure handler friend/authenticate eats body of POST request

How can I safely get the content of the :body InputStream from compojure?

See related but different question for background.

I'm trying to authenticate my ring routes with Friend using compojure handler/site but when I try to read the :body from an http POST request (which is a Java InputStream), it is closed:

23:01:20.505 ERROR [io.undertow.request] (XNIO-1 task-3) Undertow request failed HttpServerExchange{ POST /paypal/ipn}
java.io.IOException: UT000034: Stream is closed
    at io.undertow.io.UndertowInputStream.read(UndertowInputStream.java:84) ~[undertow-core-1.1.0.Final.jar:1.1.0.Final]
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) ~[na:1.8.0_45]
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) ~[na:1.8.0_45]
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) ~[na:1.8.0_45]
    at java.io.InputStreamReader.read(InputStreamReader.java:184) ~[na:1.8.0_45]
    at java.io.BufferedReader.fill(BufferedReader.java:161) ~[na:1.8.0_45]
    at java.io.BufferedReader.read(BufferedReader.java:182) ~[na:1.8.0_45]
    at clojure.core$slurp.doInvoke(core.clj:6650) ~[clojure-1.7.0-beta1.jar:na]
    at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojure-1.7.0-beta1.jar:na]

If I remove the handler, the problem goes away. I've found one possible solution called groundhog that captures and stores all requests. The library I'm using, clojure-paypal-ipn, originally called reset on the stream, but that is not supported by Undertow (or indeed several other Java/Clojure servers), so I worked around it.

Here is a related discussion with weavejester, author of compojure.

Here are some snippets of my code:

(defroutes routes
  ...
  (POST "/paypal/ipn" [] (payment/paypal-ipn-handler 
                          payment/paypal-data 
                          payment/paypal-error 
                          paypal-sandbox?))
  (route/resources "/"))

(defn authenticate-routes
  "Add Friend handler to routes"
  [routes-set]
  (handler/site
    (friend/authenticate routes-set friend-settings)))

;; middleware below from immutant.web.middleware
(defn -main [& {:as args}]
  (web/run
    (-> routes
      (web-middleware/wrap-session {:timeout 20})

      (authenticate-routes) ; use friend handler

      ;; wrap the handler with websocket support
      ;; websocket requests will go to the callbacks, ring requests to the handler
      (web-middleware/wrap-websocket websocket-callbacks))
    args))

And here are the guts of payment.clj (paypal-data and paypal-error just pprint input right now):

(defn req->body-str
  "Get request body from a ring POST http request"
  [req]
  (slurp (:body req)))

(defn paypal-ipn-handler
  ([on-success on-failure] (paypal-ipn-handler on-success on-failure true))
  ([on-success on-failure sandbox?]
   (fn [req]
     (let [body-str (req->body-str req)
           ipn-data (paypal/parse-paypal-ipn-string body-str)]
       (do
         (.start (Thread. (fn [] (paypal/handle-ipn ipn-data on-success on-failure sandbox?))))
         ; respond to PayPal right away, then go and process the ipn-data
         {:status  200
          :headers {"Content-Type" "text/html"}
          :body    ""})))))

by sventechie at April 27, 2015 01:39 PM

How can I find the index of the maximum values along the rows of a matrix in Spark Scala?

I have a question about finding the index of the maximum value along each row of a matrix. How can I do this in Spark Scala? This function would be like argmax in numpy in Python.
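
For illustration, a minimal sketch assuming the matrix is represented as an RDD of rows (RDD[Array[Double]]); like numpy's argmax, ties go to the first maximal index:

import org.apache.spark.rdd.RDD

def rowArgmax(matrix: RDD[Array[Double]]): RDD[Int] =
  matrix.map { row =>
    row.zipWithIndex.maxBy(_._1)._2  // index of the largest value in this row
  }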

by Татьяна Паскевич at April 27, 2015 01:38 PM

CompsciOverflow

Example of worst case input for Build-Max-Heap

Is there a worst-case input for Build-Max-Heap?

I know there is, but I just couldn't paint a clear picture of it in my head.

by iterence at April 27, 2015 01:36 PM

StackOverflow

Are there any pitfalls to using Any vs specific argument type?

Suppose I want a method to output something, a string or integer in this case. I can do it like this:

def outString(str: String): String = {
  str // or "return str"
}

and run it like this: outString("foo")

But I can also avoid committing to a specific argument type and it'll still work:

def outString(str: Any): Any = {
  str
}

and run it either like this: outString("foo") or outString(123).

Given that they both work, and assuming a situation where you don't always know the type of the argument passed, are there any pitfalls in using Any instead of specific argument types? Does Any do any kind of automatic runtime type checking, like interpreted languages do, that would slow the code down?
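
One concrete pitfall, sketched below with a hypothetical Any-based variant (not the code above): with Any the static type is gone from the caller's view, so the compiler can no longer check operations, and callers have to cast or pattern-match to get the value back. No interpreter-style dynamic checking is added, but primitives passed as Any get boxed, and the compile-time safety is lost:

def outAny(value: Any): Any = value      // hypothetical Any-based variant

val s = outAny("foo")
// s.length                              // does not compile: Any has no member 'length'
val len = s.asInstanceOf[String].length  // caller must cast back, which can fail at runtime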

by Alper Turan at April 27, 2015 01:26 PM

spray akka deployment on webserver

I have an application built on spray + akka, using this guide:

http://sysgears.com/articles/building-rest-service-with-scala/

It explains this example: https://github.com/oermolaev/simple-scala-rest-example

The application is working just fine. But when trying to deploy it on a web server, I didn't find a way to do that.

I've tried to use xsbt-web-plugin to deploy on Tomcat, and got the following output:

~container:start

[info] starting server ... Adding Context for target/webapp ...
Starting service Tomcat
Starting Servlet Engine: Apache Tomcat/7.0.34
org.apache.catalina.startup.ContextConfig getDefaultWebXmlFragment INFO: No global web.xml found
org.apache.coyote.AbstractProtocol start INFO: Starting ProtocolHandler ["http-nio-8080"]

But Tomcat is returning 404 for all the requests.

Does someone know how I can deploy a spray/akka application on Tomcat?

by griffon vulture at April 27, 2015 01:09 PM

infra-talk

Practical fault detection on timeseries part 2: first macros and templates

In the previous fault detection article, we saw how we can cover a lot of ground in fault detection with simple methods and technology that is available today. It had an example of a simple but effective approach to find sudden spikes (peaks and drops) within fluctuating time series. This post explains the continuation of that work and provides you the means to implement this yourself with minimal effort. I'm sharing with you:
  • Bosun macros which detect our most common not-trivially-detectable symptoms of problems
  • Bosun notification template which provides a decent amount of information
  • Grafana and Graph-Explorer dashboards and integration for further troubleshooting
We reuse this stuff for a variety of cases where the data behaves similarly and I suspect that you will be able to apply this to a bunch of your monitoring targets as well.

by Dieter on the web at April 27, 2015 01:05 PM

StackOverflow

Scala - Does pattern matching break the Open-Closed principle? [duplicate]

This question already has an answer here:

First of all, I know this question has been asked previously here, but it wasn't clear for me.

Pattern matching is used to make a function react to different types of data. One could say that if my pattern match has 4 cases and one month later I need to add a 5th one, I'll be breaking the Open-Closed Principle. I agree with that.

In a worst-case scenario, let's suppose I'm using a closed library (I can't touch the code inside it) and I need to extend its functionality. The functionality I want to extend is indeed a pattern-matching function. What should I do?
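
One commonly cited escape hatch, sketched here in the abstract (not tied to any particular library): move the varying behavior into a typeclass, so adding a case means adding an instance rather than editing a closed match:

object OpenExtension {
  trait Handler[A] { def handle(a: A): String }

  implicit val intHandler: Handler[Int] = new Handler[Int] {
    def handle(a: Int) = s"got an Int: $a"
  }
  implicit val stringHandler: Handler[String] = new Handler[String] {
    def handle(a: String) = s"got a String: $a"
  }

  def process[A](a: A)(implicit h: Handler[A]): String = h.handle(a)

  // a month later, a new case can live in completely different code:
  case class Euro(cents: Long)
  implicit val euroHandler: Handler[Euro] = new Handler[Euro] {
    def handle(e: Euro) = s"got ${e.cents} cents"
  }
}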

I think pattern matching is OK if I'm totally sure my object doesn't change very often and will never need to be extended by others.

What's your opinion about using this technique? This is more like a debate than a question.

Thanks,

by Santiago Ignacio Poli at April 27, 2015 01:02 PM

i want to store each rdd into database in twitter streaming using apache spark but got error of task not serialize in scala

I wrote code in which a Twitter stream takes an RDD of a Tweet class and stores each RDD in a database, but I get a 'task not serializable' error. I paste the code here; please help me.

sparkstreaming.scala

case class Tweet(id: Long, source: String, content: String, retweet: Boolean, authName: String, username: String, url: String, authId: Long, language: String)

trait SparkStreaming extends Connector {

  def startStream(appName: String, master: String): StreamingContext = {
    val db = connector("localhost", "rmongo", "rmongo", "pass")
    val dbcrud = new DBCrud(db, "table1")
    val sparkConf: SparkConf = new SparkConf().setAppName(appName).setMaster(master).set("spark.driver.allowMultipleContexts", "true").set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    //  .set("spark.kryo.registrator", "HelloKryoRegistrator")
    //    sparkConf.registerKryoClasses(Array(classOf[DBCrud]))
    val sc: SparkContext = new SparkContext(sparkConf)
    val ssc: StreamingContext = new StreamingContext(sc, Seconds(10))
    ssc
  }
}
object SparkStreaming extends SparkStreaming

I use this streaming context in a Play controller to store tweets in the database, but it throws an exception. I am using MongoDB to store them.

def streamstart = Action {
    val stream = SparkStreaming
    val a = stream.startStream("ss", "local[2]")
    val db = connector("localhost", "rmongo", "rmongo", "pass")
    val dbcrud = DBCrud
    val twitterauth = new TwitterClient().tweetCredantials()
    val tweetDstream = TwitterUtils.createStream(a, Option(twitterauth.getAuthorization))
    val tweets = tweetDstream.filter { x => x.getUser.getLang == "en" }.map { x => Tweet(x.getId, x.getSource, x.getText, x.isRetweet(), x.getUser.getName, x.getUser.getScreenName, x.getUser.getURL, x.getUser.getId, x.getUser.getLang) }
    //  tweets.foreachRDD { x => x.foreach { x => dbcrud.insert(x) } }
    tweets.saveAsTextFiles("/home/knoldus/sentiment project/spark services/tweets/tweets")
    //    val s=new BirdTweet() 
    //    s.hastag(a.sparkContext)
    a.start()
    Ok("start streaming")
  }

When I build a single streaming job that takes the tweets and uses foreachRDD to store each tweet, it works; but when I use it from outside, as above, it does not. Please help me.
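
For reference, a hedged sketch of the usual shape of the fix: the MongoDB client is not serializable, so instead of capturing it from the driver, create it on the executors inside foreachPartition (connector and DBCrud are the question's own names):

tweets.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // created on the executor, so nothing non-serializable is captured in the closure
    val db = connector("localhost", "rmongo", "rmongo", "pass")
    val crud = new DBCrud(db, "table1")
    partition.foreach(tweet => crud.insert(tweet))
  }
}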

by Sandeep Purohit at April 27, 2015 12:57 PM

CompsciOverflow

Is a LBA with stack more powerful than a LBA without?

Even though a linear bounded automaton (LBA) is strictly more powerful than a pushdown automaton (PDA), adding a stack to an LBA might make it more powerful still.

An LBA with a stack should not be Turing complete, because its halting problem should be decidable. (It seems to be equivalent to the halting problem for a PDA.)

Can a deterministic LBA with a stack decide all problems decided by a nondeterministic LBA?

Or, on the other hand, is an LBA with a stack perhaps (provably) no more powerful than a plain LBA?


Edit: I think I found out how a deterministic LBA with a stack can simulate a nondeterministic LBA. The stack can be used to store and recall the current state of the external memory (the linear bounded memory) as often as needed. The internal state is finite, so there is a global bound on the maximal number of nondeterministic moves available in a single step. So backtracking can be used to recursively simulate the result of each nondeterministic move.

I will have to think about how to make these two observations (halting problem is decidable, nondeterministic LBA can be simulated deterministically) more rigorous, before self-answering. I'm away next week, so don't hold your breath.

by Thomas Klimpel at April 27, 2015 12:51 PM

Planet Theory

Whales of the Web

The average web site has links connecting it with 29 other sites. I came up with this number in the following way. A data set I’ve been playing with for a few weeks lists 43 million web sites and 623 million links between sites; the quotient of those numbers is about 14.5. Since each link has two ends, the per-site total of inbound plus outbound links is double the quotient.

Twenty-nine links is the average, but by no means is it a typical number of links. Almost half of the sites have four or fewer links. At the other end of the spectrum, the most-connected web site (blogspot.com) has almost five million links, and there are six more sites with at least a million each. The distribution of link numbers—the degree sequence—looks like this (both scales are logarithmic, base 2):

Degree sequence of the WWW

I want to emphasize that these are figures for web sites, not web pages. The unit of aggregation is the “pay-level domain”—the domain name you have to pay to register. Examples are google.com or bbc.co.uk. Subdomains, such as maps.google.com, are all consolidated under the main google.com entry. Any number of links from pages on site A to pages on site B are recorded as a single link from A to B.

The source of these numbers is the Web Data Commons, a project overseen by a group at the University of Mannheim. They extracted the lists of domains and the links between them from a 2012 data set compiled and published by the Common Crawl Foundation (which happens to be the subject of my latest American Scientist column). The Common Crawl does essentially the same thing as the big search engines—download the whole Web, or some substantial fraction of it—but the Common Crawl makes the raw data publicly available.

There are interesting questions about both ends of the degree sequence plotted above. At the far left, why are there so many millions of lonely, disconnected web sites, with just one or two links, or none at all? I don’t yet feel I know enough to tell the story of those orphans of the World Wide Web. I’ve been focused instead on the far right of the graph, on the whales of the Web, the handful of sites with links to or from many thousands of other sites.

From the set of 43 million sites, I extracted all those with at least 100,000 inbound or outbound links; in other words, the criterion for inclusion in my sample was \(\min(indegree, outdegree) \ge 100,000\). It turns out that just 112 sites qualify. In the diagram below, they are grouped according to their top-level domain (com, org, de, and so on). The size of the colored dot associated with each site encodes the total number of links; the color indicates the percent of those links that are incoming. Hover over a site name to see the inbound, outbound and bidirectional links between that site and the other members of this elite 112. (The diagram was built with Mike Bostock’s d3.js framework, drawing heavily on this example.)


The bright red dots signify a preponderance of outgoing links, with relatively few incoming ones. Many of these sites are directories or catalogs, with lists of links classified by subject matter. Such “portal sites” were popular in the early years of the Web, starting with the World Wide Web Home at CERN, circa 1994; another early example was Jerry and David’s Guide to the World Wide Web, which evolved into Yahoo. Search engines have swept aside many of those hand-curated catalogs, but there are still almost two dozen of them in this data set. Curiously, the Netherlands and Germany (nl and de) seem to be especially partial to hierarchical directories.

Bright blue dots are rarer than red ones; it’s easier to build a site with 100,000 outbound links than it is to persuade 100,000 other sites to link to yours. The biggest blue dot is for wordpress.org, and I know the secret of that site’s popularity. If you have a self-hosted WordPress blog (like this one), the software comes with a built-in link back to home base.

Another conspicuous blue dot is gmpg.org, which mystified me when I first noticed that it ranks fourth among all sites in number of incoming links. Having poked around at the site, I can now explain. GMPG is the Global Multimedia Protocols Group, a name borrowed from the Neal Stephenson novel Snow Crash. In 2003, three friends created a real-world version of GMPG as a vehicle for the XHTML Friends Network, which was conceived as a nonproprietary social network. One of the founders was Matt Mullenweg, who was also the principal developer of WordPress. Hence every copy of WordPress includes a link to gmpg.org. (The link is in the <head> section of the HTML file, so you won’t see it on the screen.) At this point GMPG looks to be a moribund organization, but nonetheless more than a million web sites have links to it.

Networkadvertising.org is the web site of a trade group for online advertisers. Presumably, its 143,863 inbound links are embedded in ads, probably in connection with the association’s opt-out program for behavioral tracking. (To opt out, you have to accept a third-party cookie, which most people concerned about privacy would refuse to do.)

Still another blue-dot site, miibeian.gov.cn, gets its inward links in another way. If I understand correctly, all web sites hosted in China are required to register at miibeian.gov.cn, and they must place a link back to that site on the front page. (If this account is correct, the number of inbound links to miibeian.gov.cn tells us the number of authorized web sites in China. The number in the 2012 data is 289,605, which seems low.)

One final observation I find mildly surprising: Measured by connectivity, these 112 sites are the largest on the entire Web, and you might think they would be reasonably stable over time. But in the three years since the data were collected, 10 percent of the sites have disappeared altogether: Attempts to reach them either time out or return a connection error. At least a few more sites have radically changed their character. For example, serebella.com was a directory site that had almost 700,000 outbound links in 2012; it is now a domain name for sale. Among web sites, it seems, nobody is too big to fail.

The table below lays out the numbers for the 112 sites. It’s sortable: Click on any of the column headers to sort on that field; click again to reverse the ordering. If you’d like to play with the data yourself, download the JSON file.

site | inlinks | outlinks | total links | % inbound

by Brian Hayes at April 27, 2015 12:45 PM

StackOverflow

How to convert a Scala Array[Byte] to Java byte[]?

I have an Akka application with actors written in Scala and others in Java. In one case a Scala Actor writes an Array[Byte] and I need to deserialize this from a Java Actor. In this use-case I ultimately need a String representation in Java of the Array[Byte] so that would also solve my problem.

Scala Actor:

val outputStream = new java.io.ByteArrayOutputStream()
val bufferedOutputStream = new java.io.BufferedOutputStream(outputStream, 1024)
val exitCode : Integer = processBuilder #> bufferedOutputStream !
bufferedOutputStream.flush
val content = outputStream.toByteArray // this gives an Array[Byte]
javaActorRef.tell(content, getSelf())

Java Actor:

/**
 * {@inheritDoc}
 */
@Override
public void onReceive(Object object) throws Exception {
    // object holds a Scala Array[Byte]; how do I convert it here to
    // byte[] or to a String?
}

by Giovanni Azua at April 27, 2015 12:36 PM

On the 4Clojure website, how do you see your previous problems, so you can return to one of them?

I'm doing problems at 4Clojure. Somehow, when I finished a problem (around #22), a link appeared that jumped me to problem #35. I want to do the problems that were skipped. But when I'm logged in to 4Clojure, the website only shows me problems after the last one I completed, which is #34.

Is there a way to see all the problems (completed and uncompleted) at 4Clojure, so I can do the skipped ones?

by devdanke at April 27, 2015 12:33 PM

TheoryOverflow

Entropy of sum of two dependent variables

How do I bound the entropy of the sum of two variables which are:

  1. fully dependent (circular rotations of the same series): $H(X(N)+X(N-\tau))$

  2. partially dependent random variables $X$ and $Y$?
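
One elementary starting point for both cases: since $X+Y$ is a function of the pair $(X,Y)$, $$H(X+Y) \le H(X,Y) \le H(X) + H(Y),$$ with equality in the second step iff $X$ and $Y$ are independent. In the fully dependent case, $H(X(N)+X(N-\tau)) \le H(X)$, because the sum is a function of the series $X$ alone.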

by Jay Prakash at April 27, 2015 12:13 PM

QuantOverflow

How to differentiate a brownian motion?

By definition, a Wiener process cannot be differentiated.

But when we use Itô's lemma on $F = X^2$, where $X$ is a Wiener process, we have the total change

$$dF = 2X\,dX + dt.$$

How can we calculate $dF/dX$ when by definition it cannot be differentiated? Isn't this a contradiction of the definition?
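
One way to see that there is no contradiction: Itô differential notation is shorthand for an integral identity, not a pointwise derivative. With $X(0)=0$, the statement $dF = 2X\,dX + dt$ abbreviates $$X^2(t) = 2\int_0^t X(s)\,dX(s) + t,$$ so no derivative $dF/dX$ is being asserted anywhere.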

by user139258 at April 27, 2015 12:02 PM

Planet Emacsen

Irreal: Some Dired Tips

Marcin Borkowski (mbork) has a nice post on Emacs dired tips. Dired is amazingly powerful but most of us don't begin to take advantage of its capabilities. Mbork looks at a few things you can do to enhance your workflow with dired. Many of them I was (and you probably are) familiar with but I did learn about how to omit “uninteresting” files.

He covers several areas so you should take a look. If you learn even one thing you didn't know, it will be time well spent.

by jcs at April 27, 2015 11:55 AM

StackOverflow

pattern for return values

In validate methods, what kind of return types or return-value patterns are preferred?

If a validate function returns success/failure and the failed fields as a result, should the function return an array of {true/false, array of error fields}, or true on success and an array of error fields in case of failure?

by hsen at April 27, 2015 11:36 AM

Spray marshalling collection of ToResponseMarshallable

I have a simple Spray scenario that doesn't work as I've expected.

I have a spray server that dispatches work to different modules, and then return their composed responses.

As I don't want to limit Module responses, I chose to have the Modules' methods return ToResponseMarshallable, as I have Modules that need to return a plain String.

Module signature could be:

def moduleProcessing(): ToResponseMarshallable = randomString()

And my "complete" block look similar to this:

complete {   

    val response1 = moduleProcessing()
    val response2 = moduleProcessing()
    Seq(response1,response2) 
}

In the previous example, I would expect to get:

[{"someRandomString"},{"anotherRandomString"}]

But I am getting:

[{},{}]

Of course it propagates as expected if I return a single response, or if I change moduleProcessing's return type to any concrete marshallable type.

Thanks in advance for your help!

by uris at April 27, 2015 11:33 AM

CompsciOverflow

How do I start extract standards from data?

I have a database of machine data (tables like machine cycles, machine phases, and many more) and I am supposed to analyze this data. That is, from the data present in the different tables, I have to filter out, or rather define, standard machine cycles, standard machine phases, etc. Can you please help me with a starting point: how should I begin my analysis to define standard cycles and standard machine phases from the data available in the database?

by user3483952 at April 27, 2015 11:09 AM

Multiclass classification with growing number of classes

I have a dataset of music listening history: when it was listened to, where it was listened to, what the weather outside was (and many more features are coming soon), and a track_id as a label.

Listening history

I want to run a multiclass classification on this data but I have these problems:

  • Constantly mapping my track_ids to classes [0..distinct_trackid_count) and back
  • I have a huge number of classes (tens of thousands)
  • The number of classes is constantly growing, so I always have to retrain my algorithm from the start

I have a feeling that multiclass classification is not what I need here, and I need help in figuring out how to approach this problem

by Nizamutdinov Adel at April 27, 2015 11:08 AM

UnixOverflow

How to measure PCI-Express bus usage?

I'm looking for a way to find out whether the PCIe bus is the bottleneck or not.

It's not a problem to measure how many bytes were transferred through any particular NIC:


Is there a way to find how much data was transferred to all the other PCIe devices (hard drives, video cards, etc.)?

by Anthony Ananich at April 27, 2015 11:07 AM

/r/compsci

Powerpoint slides for this book?

I am looking for PowerPoint slides for Computer Graphics: Principles and Practice, 3rd edition. Are these available for the entire book? Like these: http://cs.brown.edu/courses/cs123/lectures/CS123.L23_Acceleration_Data_Structures_11.6.14.pdf

submitted by Ganda2

April 27, 2015 11:02 AM

StackOverflow

Spray, ScalaTest and HTTP services: could not find implicit value for parameter ta:

I am, again, struggling with spray and cannot set up a test correctly. I looked at the similar question spray-testkit: could not find implicit value for parameter ta: and the official spray template and cannot figure out what I am missing and/or doing wrong.

I have a very simple service:

trait SimpleService extends HttpService{

  val fooRoute = path("foo") {
    get {
      complete("bar")
    }
  }
}

And I have a very simple test:

class SimpleServiceTest extends FlatSpec with Matchers with SimpleService with ScalatestRouteTest {

  override implicit def actorRefFactory: ActorRefFactory = system

  "The service" should "return OK status when getting /foo" in {
    Get("/foo") ~> fooRoute ~> check {
      status should be(OK)
    }
  }
}

when I try to compile this, I get the following error:

Error:(17, 17) could not find implicit value for parameter ta: SimpleServiceTest.this.TildeArrow[spray.routing.RequestContext,Unit]
Get("/foo") ~> fooRoute ~> check {
            ^

Can anyone help me and tell me what I am missing? I don't see anything unusual, and I am close to evaluating Play instead of spray.

by rabejens at April 27, 2015 10:55 AM

spray-can webservice graceful shutdown

I have a spray.io-based web service; it runs as a standalone jar (I use sbt assembly and then just java -jar myws.jar). It has pretty much the same bootstrap as in the spray examples, like this:

/** Bootstrap */
object Boot extends App {
   // we need an ActorSystem to host our application in
   implicit val system = ActorSystem("my-system")

   // create and start our service actor
   val service = system.actorOf(Props[MyServiceActor], "my-ws-service")

   implicit val timeout = Timeout(10.seconds)

   CLIOptionsParser.parse(args, CLIOptionsConfig()) map { config =>
     // start a new HTTP server
     IO(Http) ? Http.Bind(service, interface = config.interface, port = config.port)
   }
}

Now I just run the process in the background with java -jar my-service "$@" & and stop it with kill -9 pid.

I'd like to stop my webservice gracefully, meaning that it finishes open connections and refuses new ones.

The spray-can page on GitHub recommends sending it an Akka PoisonPill message. Ideally I'd like to initiate this from the command line, as simply as possible. I thought of maybe attaching one more HTTP server instance, bound to localhost only, with some REST methods to stop and maybe diagnose the web service. Is that feasible? What are the other options?

UPDATE: I added what I imagine ought to work, based on the answers, but it seems not to; at least I've never seen any of the messages I expected in stdout or the log. I've tried variations of Http.Unbind and PoisonPill, together and one by one. Could anyone with a sharp Akka eye look at this? PS: the hook itself is called successfully (I checked); the signal I send to the JVM is SIGTERM.

/* Simple reaper actor */
class Reaper(refs: ActorRef*) extends Actor {
  private val log = Logging(context.system, this)
  val watched = ArrayBuffer(refs: _*)

  refs foreach context.watch

  final def receive = {
    case Terminated(ref) =>
      watched -= ref
      log.info(s"Terminated($ref)")
      println(s"Terminated($ref)")
      if (watched.isEmpty) {
        log.info("Shutting dow the system")
        println("Shutting dow the system")
        system.shutdown()
      }
  }
}


// termination hook to gracefully shutdown the service
Runtime.getRuntime.addShutdownHook(new Thread() {
  override def run() = {
    val reaper = system.actorOf(Props(new Reaper(IO(Http), service)))
    //IO(Http) ? Http.Unbind(5.minutes)
    IO(Http) ! PoisonPill
  }
})

UPDATE 2: So, somehow it works; namely, when PoisonPill is sent, all current HTTP connections get closed. But I'd rather stop accepting new connections, wait for open ones to return their responses, and then close.

VERDICT: It seems that Akka has its own hook, because although my hook gets executed, the actors get killed and all connections get closed without my doing anything. If someone can offer a solution with a JVM shutdown hook, that would be great. I'd suggest this is an important problem, and sadly there is no good recipe for it online. In the meantime I will try to implement graceful shutdown using TCP/HTTP.
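
For the record, a hedged sketch of the unbind-first sequence (message names per spray-can 1.3.x; whether in-flight requests are fully drained within the grace period should be verified). The idea is to remember the listener, which is the sender of Http.Bound, and send it Http.Unbind before taking the system down:

class Terminator extends Actor {
  private var listener: ActorRef = context.system.deadLetters

  def receive = {
    case b: Http.Bound =>
      listener = sender()                 // the HttpListener that accepted the bind
    case Terminator.Shutdown =>
      listener ! Http.Unbind(30.seconds)  // refuse new connections, grace period for open ones
    case Http.Unbound =>
      context.system.shutdown()           // only now take the whole system down
  }
}
object Terminator { case object Shutdown }

Boot would then issue the bind with the terminator as sender, e.g. IO(Http).tell(Http.Bind(service, interface = config.interface, port = config.port), terminator), and the JVM shutdown hook reduces to terminator ! Terminator.Shutdown.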

by dmitry at April 27, 2015 10:51 AM

joda DateTime format cause null pointer error in spark RDD functions

The exception message is as follows:

User class threw exception: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 11, 10.215.155.82): java.lang.NullPointerException
    at org.joda.time.tz.CachedDateTimeZone.getInfo(CachedDateTimeZone.java:143)
    at org.joda.time.tz.CachedDateTimeZone.getOffset(CachedDateTimeZone.java:103)
    at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:676)
    at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:521)
    at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:625)
    at org.joda.time.base.AbstractDateTime.toString(AbstractDateTime.java:328)
    at com.tencent.ieg.face.demo.DateTimeNullReferenceReappear$$anonfun$3$$anonfun$apply$1.apply(DateTimeNullReferenceReappear.scala:41)
    at com.tencent.ieg.face.demo.DateTimeNullReferenceReappear$$anonfun$3$$anonfun$apply$1.apply(DateTimeNullReferenceReappear.scala:41)
    at scala.collection.TraversableLike$$anonfun$groupBy$1.apply(TraversableLike.scala:328)
    at scala.collection.TraversableLike$$anonfun$groupBy$1.apply(TraversableLike.scala:327)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at org.apache.spark.util.collection.CompactBuffer$$anon$1.foreach(CompactBuffer.scala:113)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at org.apache.spark.util.collection.CompactBuffer.foreach(CompactBuffer.scala:28)
    at scala.collection.TraversableLike$class.groupBy(TraversableLike.scala:327)
    at org.apache.spark.util.collection.CompactBuffer.groupBy(CompactBuffer.scala:28)
    at com.tencent.ieg.face.demo.DateTimeNullReferenceReappear$$anonfun$3.apply(DateTimeNullReferenceReappear.scala:41)
    at com.tencent.ieg.face.demo.DateTimeNullReferenceReappear$$anonfun$3.apply(DateTimeNullReferenceReappear.scala:40)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at scala.collection.AbstractIterator.to(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
    at org.apache.spark.rdd.RDD$$anonfun$26.apply(RDD.scala:1081)
    at org.apache.spark.rdd.RDD$$anonfun$26.apply(RDD.scala:1081)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1314)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

My code as following:

package com.tencent.ieg.face.demo

import org.apache.hadoop.conf.Configuration
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.{ SparkConf, SparkContext }
import org.joda.time.DateTime
import org.joda.time.format.{ DateTimeFormat, DateTimeFormatter }

import com.tencent.ieg.face.{ FaceConf, TestConf }
import com.tencent.ieg.face.{ FaceTest, FaceDrive, MainApp }
import com.tencent.ieg.face.game.Login
import com.tencent.ieg.face.utils.RDDImplicits._
import com.tencent.ieg.face.utils.TdwUtils

import com.tencent.tdw.spark.api.TDWSparkContext

object DateTimeNullReferenceReappear extends App {

  case class Record(uin: String = "", date: DateTime = null, value: Double = 0.0) 

  val cfg = new Configuration
  val sparkConf = new SparkConf()
  sparkConf.setAppName("bourne_exception_reappear")
  val sc = new SparkContext(sparkConf)

val data = TDWSparkContext.tdwTable(   // this function just read data from an data warehouse
  sc,
  tdwuser = FaceConf.TDW_USER,
  tdwpasswd = FaceConf.TDW_PASSWORD,
  dbName = "my_db",
  tblName = "my_table",
  parts = Array("p_20150323", "p_20150324", "p_20150325", "p_20150326", "p_20150327", "p_20150328", "p_20150329"))
  .map(row => {
    Record(uin = row(2),
      date = DateTimeFormat.forPattern("yyyyMMdd").parseDateTime(row(0)),
      value = row(4).toDouble)
  }).map(x => (x.uin, (x.date, x.value)))
  .groupByKey
  .map(x => {
    x._2.groupBy(_._1.toString("yyyyMMdd")).mapValues(_.map(_._2).sum)   // throw exception here
  })

//      val data = TDWSparkContext.tdwTable(  // It works, as I don't user datetime toString in the groupBy 
//      sc,
//      tdwuser = FaceConf.TDW_USER,
//      tdwpasswd = FaceConf.TDW_PASSWORD,
//      dbName = "hy",
//      tblName = "t_dw_cf_oss_tblogin",
//      parts = Array("p_20150323", "p_20150324", "p_20150325", "p_20150326", "p_20150327", "p_20150328", "p_20150329"))
//      .map(row => {
//        Record(uin = row(2),
//          date = DateTimeFormat.forPattern("yyyyMMdd").parseDateTime(row(0)),
//          value = row(4).toDouble)
//      }).map(x => (x.uin, (x.date.toString("yyyyMMdd"), x.value)))
//      .groupByKey
//      .map(x => {
//        x._2.groupBy(_._1).mapValues(_.map(_._2).sum)
//      })

  data.take(10).map(println)

}

So it seems that calling toString inside the groupBy causes the exception. Can anybody explain it?
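A plausible reading, not a confirmed diagnosis: the NullPointerException in CachedDateTimeZone.getInfo is commonly attributed to Joda-Time's time-zone cache not being fully restored when DateTime objects are serialized to the executors, which is why formatting only fails after the shuffle. A minimal sketch of the workaround the commented-out variant already uses: format the date before the shuffle so only plain strings cross the wire (fragment; field names as in the question):

.map(x => (x.uin, (x.date.toString("yyyyMMdd"), x.value)))  // format before groupByKey
.groupByKey
.map { case (_, vs) => vs.groupBy(_._1).mapValues(_.map(_._2).sum) }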

Thanks

by bourneli at April 27, 2015 10:49 AM

TheoryOverflow

Proof of Kolmogorov complexity is uncomputable using reductions

I am looking for a proof that Kolmogorov complexity is uncomputable using a reduction from another uncomputable problem. The common proof is a formalization of Berry's paradox rather than a reduction, but there should be a proof by reducing from something like the Halting Problem, or Post's Correspondence Problem.

by Krishna Chikkala at April 27, 2015 10:34 AM

StackOverflow

Option fields in Scala

I have 2 RDDs that I joined together using a left join. As a result, the fields of the right RDD are now defined as Option, as they might be None (null). When writing the result to a file it looks something like this: Some(value), for example: Some('value1'), Some('Value2'). How can I remove the 'Some' / remove the Option from the field definition?
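In case it helps, a minimal sketch of the usual unwrapping (assuming the optional fields are Option[String]); the same per-field map applies to each record of the joined RDD before writing:

val fields: List[Option[String]] = List(Some("value1"), None)
val cleaned: List[String] = fields.map(_.getOrElse(""))  // List("value1", "")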

by UserIL at April 27, 2015 10:02 AM

SBT/Scala: macro implementation not found

I tried my hand at macros, and I keep running into the error

macro implementation not found: W [error] (the most common reason for that is that you cannot use macro implementations in the same compilation run that defines them)

I believe I've set up a two pass compilation with the macro implementation being compiled first, and the usage second.

Here is part of the /build.sbt:

lazy val root = (project in file(".")).
  settings(rootSettings: _*).
  settings(name := "Example").
  aggregate(macros, core).
  dependsOn(macros, core)

lazy val macros = (project in file("src/main/com/example/macros")).
  settings(macrosSettings: _*).
  settings(name := "Macros")

lazy val core = (project in file("src/main/com/example/core")).
  settings(coreSettings: _*).
  settings (name := "Core").
  dependsOn(macros)


lazy val commonSettings = Seq(
  organization := Organization,
  version := Version,
  scalaVersion := ScalaVersion
)

lazy val rootSettings = commonSettings ++ Seq(
  libraryDependencies ++= commonDeps ++ rootDeps ++ macrosDeps ++ coreDeps
)

lazy val macrosSettings = commonSettings ++ Seq(
  libraryDependencies ++= commonDeps ++ macrosDeps
)

lazy val coreSettings = commonSettings ++ Seq(
  libraryDependencies ++= commonDeps ++ coreDeps
)

The macro implementation looks like this:

/src/main/com/example/macros/Macros.scala

object Macros {
  object Color {
    def ColorWhite(c: Context): c.Expr[ObjectColor] = c.Expr[ObjectColor](c.universe.reify(ObjectColor(White())).tree)
  }
}

The usage looks like this:

/src/main/com/example/core/Main.scala

object Macros {
  import com.example.macros.Macros._
  def W: ObjectColor = macro Color.ColorWhite
}

object Main extends App {
  import Macros._
  println(W)
}

Scala 2.11.6. SBT 0.13.8.

What am I doing wrong?

Thanks for your advice!
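One plausible culprit (an assumption based on the paths above, not a confirmed diagnosis): project in file("src/main/com/example/macros") points a subproject at a package directory rather than at a directory with its own src/main/scala, so the macros project may compile nothing and the macro implementation ends up compiled in the same run as its usage. A sketch of the conventional layout, with hypothetical directory names:

// build.sbt sketch: each subproject gets its own top-level directory,
// e.g. macros/src/main/scala and core/src/main/scala
lazy val macros = (project in file("macros"))
  .settings(commonSettings: _*)
  .settings(
    name := "Macros",
    libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
  )

lazy val core = (project in file("core"))
  .settings(commonSettings: _*)
  .settings(name := "Core")
  .dependsOn(macros)  // macro defs now compile in a separate run from their usage

lazy val root = (project in file("."))
  .settings(name := "Example")
  .aggregate(macros, core)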

by Wonko at April 27, 2015 10:02 AM

Adding to immutable HashSet

Sorry guys, I recently saw an example in "Programming in Scala", 2nd Edition on page 685, which seemed strange to me:

var hashSet: Set[C] = new collection.immutable.HashSet
hashSet += elem1

How is it possible to add something to an immutable collection? I tried it in the REPL and it worked fine!

> scala
Welcome to Scala version 2.11.6 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_11).
Type in expressions to have them evaluated.
Type :help for more information.

scala> var s : Set[Int] = collection.immutable.HashSet()
s: Set[Int] = Set()

scala> s += 1324

scala> println(s)
Set(1324)

Stranger still, the += operator is not listed on the immutable.HashSet API page. Could anybody please help me understand what's going on?
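For what it's worth, a sketch of what the compiler does here (standard desugaring, easy to check in the REPL): when s is a var and its type has no += method, s += x is rewritten as reassignment of the var, so the immutable set itself is never mutated.

var s: Set[Int] = collection.immutable.HashSet()
s += 1324   // desugars to: s = s + 1324
// `+` returns a NEW immutable set and the var `s` is rebound to it;
// with `val s = ...` the same line would not compile.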

Thanks.

by Faramarz at April 27, 2015 09:56 AM

Failed to find Spark assembly JAR on windows 8.1

I get the error "Failed to find Spark assembly JAR" when trying to run the script spark-shell.cmd from the bin directory. I have downloaded a pre-built version of Spark (1.3.1 for Hadoop 2.6 and later), I have Java 1.8 installed, and I wish to install this on a Windows 8.1 x64 machine. I also have Scala 2.11.6 installed.

Can I get some help as to why this might be happening? What additional configuration needs to be made for me to successfully run Spark?

by Marin at April 27, 2015 09:55 AM

Fred Wilson

Using CrowdRise To Help People In Nepal

When a disaster strikes, caring people all over the world seek ways they can help. Usually that means giving funds to a global relief organization like the Red Cross. But in the age of crowdfunding, giving to relief efforts takes on an entirely new flavor. You can see that in action on our portfolio company CrowdRise’s service this morning.

Crowdfunding means you can target your giving with more granularity.

You can give to this family in the US raising money to help their relatives in Nepal, who are now living in a temporary tent and are in desperate need for help.

You can give to this campaign where CrowdRise employee Mallory is raising funds to go to Nepal and help.

You can give to this campaign that celebrates Google engineer Dan Fredinburg who was killed while climbing Mount Everest this weekend.

You can give to this campaign that benefits a local relief effort.

The Gotham Gal and I have given to all of these campaigns and I hope you will consider giving to something as well.

All of the Nepal relief efforts on CrowdRise can be seen here.

by Fred Wilson at April 27, 2015 09:48 AM

StackOverflow

Where to put config for ansible scripts whilst documenting which settings are required?

I have an ansible playbook yaml file that contains the following:

---
- vars:
    remote_application_path: /apps/application1
    local_project_path: ~/projects/application1
  hosts: Selective
  remote_user: me

I want to commit this into Git. Other users will have a different remote_user and a different local_project_path: where can I put these configuration variables whilst also making it clear to other users that they need to specify them?

by bcmcfc at April 27, 2015 09:46 AM

TheoryOverflow

Represent an octree as a binary tree of thrice the depth?

There already is a similar question out here, asking about the difference between octrees and kd-trees. The difference is mainly that kd-trees divide space unevenly. However, I wonder whether it would make sense to represent an octree as binary tree that at each node bisects the space exactly in the middle.

  1. Layer (root) divides along YZ-plane
  2. Layer divides along XZ-plane
  3. Layer divides along XY-plane
  4. Layer uses YZ again
  5. ...

Has the data-structure been used? Why (not)? I think this can be simpler than a conventional octree with eight pointers at each node.

by danijar at April 27, 2015 09:44 AM

CompsciOverflow

What is the name for this search algorithm?

A colleague came up with the following search algorithm:

  • We have an array of sorted, distinct integers [$i_0$ < $i_1$ < ... < $i_n$]
  • We are looking for the index of $i_k$ (for simplicity suppose that $i_k$ is in the array)
  • We suppose that the elements are pretty uniformly distributed between $i_0$ and $i_n$

The algorithm:

  1. let "position" initially be 0
  2. if $i_k$ == $i_{position}$, stop
  3. otherwise:
    • calculate distance = $i_k$ - $i_{position}$
    • if we have just switched direction (eg. the previous distance was positive and this one is negative or vice-versa), make sure that abs(distance) < abs(previous distance) by adding/subtracting 1 from distance such that it gets closer to 0. Note: if distance is already 1 or -1, it isn't adjusted.
    • let position be position + distance (see note below for possible optimization)
    • if position is positive, let position be min(position, n) (ie. don't overshoot the end)
    • if position is negative, let position be max(position, 0) (ie. don't undershoot the beginning)

I would be interested to know whether somebody has studied this algorithm and, if so, what its name is. The best I could come up with is that it's a variant of "interpolation search"; however, I expect the above algorithm to be faster on x86 hardware because it only uses addition/subtraction, which is faster than the division/multiplication used by "standard" interpolation search.

Note:

Instead of updating position by adding the distance (ie. position = position + distance) we can use the "average distance between consecutive elements" as a scaling factor: $$position \leftarrow position + distance / {i_n-i_0 \over n+1}$$

This should give a more precise estimate about the element position.

A disadvantage of the above is that it involves a division, which is slow(er) on x86/x64 platforms. Perhaps we can get away with finding the power of two closest to the scaling factor (the average distance between consecutive elements) and using a shift to perform the division (ie. if $2^t$ is the power of two closest to the scaling factor, we can do $position \leftarrow position + (distance >> t)$).

Update:

  • as people have pointed out the initial element needs to be $i_0$
  • added steps to avoid oscillating indefinitely between undershooting / overshooting the target
  • discovered the "galloping search" from Timsort which seems related but not the same
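To make the description concrete, here is a minimal sketch of one reading of the algorithm (Scala; assumes the array is sorted and distinct and that the target is present, per the premises above, and uses the basic unscaled update):

// `dist` is a VALUE difference used as an INDEX step, which is why the
// near-uniform-distribution premise matters.
def find(a: Array[Int], target: Int): Int = {
  var pos = 0
  var prev = 0
  while (a(pos) != target) {
    var dist = target - a(pos)
    val switched = prev != 0 && (dist > 0) != (prev > 0)
    if (switched)  // shrink toward 0 until strictly smaller than the last step
      while (math.abs(dist) >= math.abs(prev) && math.abs(dist) > 1)
        dist -= dist.signum
    prev = dist
    pos = math.min(math.max(pos + dist, 0), a.length - 1)  // no over/undershoot
  }
  pos
}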

by Cd-MaN at April 27, 2015 09:33 AM

Proving that non-regular languages are closed under concatenation

How can I prove that non-regular languages are closed under concatenation using only the non-regularity of $L=\{a^n b^n \mid n\ge 1\}$?

by tobro at April 27, 2015 09:23 AM

TheoryOverflow

Can we not output the Kolmogorov complexity?

Let us fix a prefix-free encoding of Turing-machines and a universal Turing-machine $U$ that on input $(T,x)$ (encoded as the prefix-free code of $T$ followed by $x$) outputs whatever $T$ outputs on input $x$ (possibly both running forever). Define the Kolmogorov complexity of $x$, $K(x)$, as the length of the shortest program $p$ such that $U(p)=x$.

Is there a Turing machine $T$ such that for every input $x$ it outputs an integer $T(x)\le |x|$ that is different from the Kolmogorov complexity of $x$, i.e., $T(x)\ne K(x)$ but $\liminf_{|x|\rightarrow \infty} T(x)=\infty$?

The conditions are necessary, because

(a) if $T(x)\not \le |x|$, then it would be easy to output a number that is trivially different from $K(x)$ because it is bigger than $|x|+c_U$,

(b) if $\liminf_{|x|\rightarrow \infty} T(x)<C$ is allowed, then we can just output $0$ (or some other constant) for almost all numbers, by "luckily" guessing the at most finitely many numbers that evaluate to $0$ (or to some other constant) and outputting something else there. We can even guarantee $\limsup_{|x|\rightarrow \infty} T(x)=\infty$ by outputting something like $2\log n$ for $x=2^n$.

Also note that our job would be easy if we knew that $T$ is not surjective, but little is known about this, so the answer might depend on $U$, though I doubt it would.

I know that relations are studied a lot in general, but

Has anyone ever asked a similar question where our goal is to give an algorithm that does not output some parameter?

My motivation is this problem http://arxiv.org/abs/1302.1109.

by domotorp at April 27, 2015 09:23 AM

StackOverflow

Finding the minimal cover

Am I correct to say that in the following set of functional dependencies:

F={(A->B),(AB->C),(AE->C),(D->A)}

I can say that

AB->C can be reduced to A->C, since there is A->B.

by Alicia Soh at April 27, 2015 09:21 AM

Scala not found: value x when unpacking returned tuple

I've seen this kind of code on countless websites yet it doesn't seem to compile:

def foo(): (Int, Int) = {
        (1, 2)
}

def main(args: Array[String]): Unit = {
        val (I, O) = foo()
}

It fails on the marked line, reporting:

  • not found: value I
  • not found: value O

What might be the cause of this?
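A likely explanation (standard Scala pattern-matching rules, not specific to this snippet): in patterns, identifiers that start with an upper-case letter are treated as stable identifiers to match against, not as fresh variables to bind, so the compiler looks for existing values named I and O. A minimal sketch:

def foo(): (Int, Int) = (1, 2)

val (i, o) = foo()    // lower-case names bind fresh values: i = 1, o = 2
// val (I, O) = foo() // upper-case names are looked up as existing values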

by saarraz1 at April 27, 2015 09:20 AM

how to check my akka dispatchers defined in config file working or not

Hi, I am new to akka dispatchers; I took help from the akka documentation.

I want to check whether I tuned the dispatcher correctly or not. Here is my application.conf:

include "DirectUserWriteMongoActor" 

akka {
   loggers = ["akka.event.slf4j.Slf4jLogger"]
   loglevel = "DEBUG"

}

here is my DirectUserWriteMongoActor.conf

akka {
  actor {

    ############################### Settings for a Dispatcher ###############################
    directUserWriteMongoActor-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 2
        parallelism-factor = 2.0
        parallelism-max = 10
      }
      throughput = 10
    } # end directUserWriteMongoActor-dispatcher

    ############################### Settings for a Router ###############################
    deployment {
      /directUserWritwMongoActorRouter {
        router = round-robin
        nr-of-instances = 5
      }
    } # end deployment

  } # end actor
} # end akka

And here is my code

object TestActor extends App {

  val config = ConfigFactory.load().getConfig("akka.actor")

  val system = ActorSystem("TestActorSystem", config)

  val DirectUserWriteMongoActor = system.actorOf(Props[DirectUserWriteMongoActor].withDispatcher("directUserWriteMongoActor-dispatcher"), name = "directwritemongoactor")
}

class DirectUserWriteMongoActor extends Actor {
  def receive = {
    case _ =>
  }
}

When I run it, the code compiles, but I am wondering how I can tell whether the akka dispatcher is working or not. Please help.
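One way to verify (a sketch; Akka includes the dispatcher id in the names of the threads it creates, so this is directly observable): log the current thread from inside the actor.

class DirectUserWriteMongoActor extends Actor {
  def receive = {
    case msg =>
      // Expect a thread name containing "directUserWriteMongoActor-dispatcher"
      // if the custom dispatcher is attached; "default-dispatcher" otherwise.
      println(s"got '$msg' on thread ${Thread.currentThread().getName}")
  }
}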

by user3801239 at April 27, 2015 09:07 AM

Cannot find the declaration of element 'beans' inside SpringContext.xml

I am invoking a Spring bean from a Java class, and am calling that Java class from a Scala program. I have packaged my program inside a jar using Maven and am exploding the Spring dependency inside it. But when calling the bean it throws the following exception -

> User class threw exception: Job aborted due to stage failure: Task 0
> in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in
> stage 5.0 (TID 6, hostname03):
> org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException:
> Line 6 in XML document from URL
> [jar:file:/hadoop/yarn/local/usercache/root/filecache/101/SomeJar-1.0-SNAPSHOT-job.jar!/SpringContext.xml]
> is invalid; nested exception is org.xml.sax.SAXParseException;
> lineNumber: 6; columnNumber: 122; cvc-elt.1: Cannot find the
> declaration of element 'beans'. at
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:396)
> at
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:334)
> at
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:302)

My SpringContext.xml looks like-

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <import resource="classpath:RuleSpringContext.xml" />
<bean id="serviceImpl" class="com.org.name.services.dao.ServiceImpl" /> 

    <bean id="contextInitializer" class="com.org.name.services.config.AppContextInitializer" />

</beans>

Any help will be much appreciated. Also, when I run the program from Eclipse the bean gets invoked; the issue only appears when I run my jar. I am using spark-submit to run the jar.

by Y0gesh Gupta at April 27, 2015 09:01 AM

zeroMQ: zmq_recv() doesn't work

I am using ZeroMQ for send-recv messaging, using the PUB-SUB pattern.
However, it seems that I can send a message from the publisher but I cannot receive it at the subscriber. Here is my code:

//subscriber:

int main(int argc, char** argv){
    void * context = zmq_ctx_new();
    void * subscriber = zmq_socket(context, ZMQ_SUB);
    zmq_connect(subscriber, "tcp:://127.0.0.1:5556");
    const int SIZE = 20;
    char msg[SIZE];
    cout<<"receiving..."<<endl;
    cout<<zmq_recv(subscriber, msg, SIZE, 0)<<endl;
    cout<<"received";
    zmq_close(subscriber);
    zmq_ctx_destroy(context);
    return 0;
}

//publisher:

int main(int argc, char** argv){
    void * context = zmq_ctx_new();
    void * publisher = zmq_socket(context, ZMQ_PUB);
    zmq_bind(publisher, "tcp://127.0.0.1:5556");
    srandom((unsigned)time(NULL));
    char updateMsg[20] = "hello world";
    while(1)
    {
        cin.get();
        cout<<"sending..."<<endl;
        cout<<zmq_send(publisher, updateMsg, 20, 0)<<endl;
        cout<<"sent"<<endl;
    }
    zmq_close(publisher);
    zmq_ctx_destroy(context);
    return 0;
}

Now, I run the publisher, then I run the subscriber.
Then I press Enter at the publisher and it says:

sending...
20
sent

BUT the subscriber always shows only this line: receiving...
It seems that zmq_recv() is blocked.

Could you please help me?

by Thomas at April 27, 2015 08:59 AM

akka dispatcher is not working

I am writing the settings for my dispatcher in an additional conf file and then loading it in application.conf, but the dispatcher is not working even when I give the full path where the dispatcher is located. I am also checking whether the dispatcher exists by using if statements:

val config = ConfigFactory.load()

      // an actor needs an ActorSystem
      val system = ActorSystem("TestActorSystem",config)
      if(system.dispatchers.hasDispatcher("akka.actor.directUserWriteMongoActor-dispatcher"))
      {println("directUserWriteMongoActor-dispatcher exists")}
      else
      {
        println("dispatcher does not exists")
      }

When I run it, "directUserWriteMongoActor-dispatcher exists" is printed on the console, but when I try to attach the dispatcher via code

 val DirectUserWriteMongoActor = system.actorOf(Props[DirectUserWriteMongoActor].withDispatcher("akka.actor.directUserWriteMongoActor-dispatcher"), name = "directwritemongoactorr")
      DirectUserWriteMongoActor ! DirectUserWriteToMongo(directUser)

the log indicates that it is using the default-dispatcher rather than my own dispatcher named directUserWriteMongoActor-dispatcher. Here is my full code.

application.conf

include "DirectUserWriteMongoActor" 

akka {
   loggers = ["akka.event.slf4j.Slf4jLogger"]
   loglevel = "DEBUG"

}

DirectUserWriteMongoActor.conf

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"

  actor {
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = "DEBUG"

    ############################### Settings for a Dispatcher ###############################
    directUserWriteMongoActor-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
      fork-join-executor {
        parallelism-min = 2
        parallelism-factor = 2.0
        parallelism-max = 10
      }
      throughput = 10
    } # end directUserWriteMongoActor-dispatcher

  } # end actor
} # end akka

and here is my code

object TestActor extends App{
 val config = ConfigFactory.load()
 val system = ActorSystem("TestActorSystem",config)

      if(system.dispatchers.hasDispatcher("akka.actor.directUserWriteMongoActor-dispatcher"))
      {println("directUserWriteMongoActor-dispatcher exists")}
      else
      {
        println("directUserWriteMongoActor-dispatcher does not exists")
      }

      val DirectUserWriteMongoActor = system.actorOf(Props[DirectUserWriteMongoActor].withDispatcher("akka.actor.directUserWriteMongoActor-dispatcher"), name = "directwritemongoactorr")
      DirectUserWriteMongoActor ! DirectUserWriteToMongo(directUser)

DirectUserWriteMongoActor.scala

case class DirectUserWriteToMongo(directuser: DirectUser)

class DirectUserWriteMongoActor extends Actor {

  val log = Logging(context.system, this)

  def receive = {
    case DirectUserWriteToMongo(directuser) =>
      log.debug("writing to mongo")
      log.info("message received DirectUserWriteInMongo")
      val directUserStore = new directUserStore
      log.info("going to call store in mongo")
  }
}

here is the output printed on console

2015-04-27 10:40:01.392 INFO  Slf4jLogger [TestActorSystem-akka.actor.default-dispatcher-2]  -Slf4jLogger started
directUserWriteMongoActor-dispatcher exists
2015-04-27 10:40:02.262 INFO  DirectUserWriteMongoActor [TestActorSystem-akka.actor.default-dispatcher-3] akka://TestActorSystem/user/directwritemongoactorr -message received DirectUserWriteInMongo
2015-04-27 10:40:02.263 INFO  DirectUserWriteMongoActor [TestActorSystem-akka.actor.default-dispatcher-3] akka://TestActorSystem/user/directwritemongoactorr -going to call store in mongo

Please help me: what is wrong in my code or in my conf settings? The dispatcher is there, but it is not being used. Why is that so? This should be printed:

TestActorSystem-akka.actor.directUserWriteMongoActor-dispatcher-3

instead of this

TestActorSystem-akka.actor.default-dispatcher-3

Please help me; I am using akka dispatchers and additional conf files for the very first time.

by user3801239 at April 27, 2015 08:57 AM

Play framework create DEB and deploy it on Ubuntu 14.04

I'm using Play Framework 2.3.8 with Scala, and I'm trying to create a DEB package to install it on a prod server. After installation it should automatically run through "services".

I've added to build.sbt:

import com.typesafe.sbt.packager.Keys._

packageDescription := """Admin Application"""

maintainer := """Admin <contact@maintainer.com>"""

After executing

activator debian:packageBin

it generates a deb file, but after installation the script /etc/init.d/testApplication is not working.

How can I make it working on Ubuntu 14.04?

I tried to use the Java Application Archetype based on http://www.scala-sbt.org/sbt-native-packager/archetypes/java_server/

I've added:

import com.typesafe.sbt.SbtNativePackager._
import NativePackagerKeys._

packageArchetype.java_application

But still without success.
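One thing worth double-checking (an observation from the snippets above, not a verified fix): the linked documentation describes the java_server archetype, but the build enables packageArchetype.java_application, which does not generate init scripts. A sketch for sbt-native-packager of that era:

import com.typesafe.sbt.SbtNativePackager._
import NativePackagerKeys._

// java_server (not java_application) wires in the system-service pieces:
// a service user, /etc/default config, and an init/upstart script.
packageArchetype.java_server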

===== Update

After setting Upstart, during installation I'm getting:

Selecting previously unselected package testApplication.
(Reading database ... 61317 files and directories currently installed.)
Preparing to unpack testApplication_0.1_all.deb ...
Unpacking testApplication (0.1) ...
Setting up testApplication (0.1) ...
Creating system group: testApplication
Adding group `testApplication' (GID 115) ...
Done.
Creating user testApplication in group testApplication
start: Unknown job: testApplication
testApplication could not be started. Try manually with service testApplication start
Processing triggers for ureadahead (0.100.0-16) ...

And running the script manually still doesn't give any results:

michal@cantrace:~$ sudo /etc/init.d/testApplication start
 * Starting testApplication                                  [ OK ]
michal@cantrace:~$ ps aux |grep java
michal    1807  0.0  0.0  11744   920 pts/0    S+   18:33   0:00 grep --color=auto java

by JMichal at April 27, 2015 08:55 AM

Undeadly

EU study recommends OpenBSD

In this European Parliament study: “EU should finance key open source tools” pointed out to us by Paul Irofti (pirofti@), and especially at study 2, they come to the conclusion that:
"[...] the use of open source computer operating systems and applications reduces the risk of privacy intrusion by mass surveillance. Open source software is not error free, or less prone to errors than proprietary software, the experts write. But proprietary software does not allow constant inspection and scrutiny by a large community of experts."
Read more...

April 27, 2015 08:54 AM

StackOverflow

Building tests in Intellij for Play Framework is very slow

Is there a way to speed up the build time of unit tests for Play Framework in Intellij? I am doing TDD. Whenever I execute a test, it takes about 30 - 60 seconds to compile. Even a simple Hello World test takes time. Rerunning the same test even without any change will still start the make process.

I am on Intellij 14.1, on Play 2.3.8, written in Scala.

I already tried setting the java compiler to eclipse, and also tried setting Scala compiler to SBT.

by jespeno at April 27, 2015 08:50 AM

CompsciOverflow

Constructing pushdown automata [on hold]

How do you construct a pushdown automaton for this particular language?

[image: definition of the language]

by Roy Kesserwani at April 27, 2015 08:45 AM

StackOverflow

Play framework Json output issue

I have a simple action which outputs a json object string, like this:

Ok(toJson(Map(
  "results" -> result_lists
)))

This works all right. But if I do:

Ok(toJson(Map(
  "action" -> action_string, // a Scala String
  "results" -> result_lists  // a Scala List
)))

I got

No Json serializer found for type scala.collection.immutable.Map[String,java.io.Serializable]

compilation error... What's the problem?
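For context, a sketch of why this happens and one way around it (assuming result_lists has a Writes instance, e.g. a List[String]): mixing a String and a List in one Map makes the inferred value type the common supertype java.io.Serializable, for which Play has no Json serializer. Building a JsObject directly lets each value use its own Writes:

import play.api.libs.json.Json

Ok(Json.obj(
  "action"  -> action_string,  // converted with the String Writes
  "results" -> result_lists    // converted with its own Writes
))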

by davidshen84 at April 27, 2015 08:36 AM

StackOverflow

Scala: Tells me a class needs to be abstract [on hold]

I'm following the Scala class on coursera and in one of the videos the following code is used:

 abstract class IntSet {
  def contains(x: Int): Boolean
  def incl(x: Int): IntSet
}

class Empty extends IntSet {
  def contais(x: Int): Boolean = false
  def incl(x: Int): IntSet = new NonEmpty(x, new Empty, new Empty)
  override def toString = "."
}

class NonEmpty(elem: Int, left: IntSet, right: IntSet) extends IntSet{
  def contains(x: Int): Boolean =
    if (x < elem) left contains x
    else if (x > elem) right contains x
    else true

  def incl(x: Int): IntSet =
    if (x < elem) new NonEmpty(elem, left incl x, right)
    else if (x > elem) new NonEmpty(elem, left, right incl x)
    else this

  override def toString = "{" + left + elem + right + "}"
}

The compiler tells me:

 "Error:(6, 8) class Empty needs to be abstract, since method contains
 in class IntSet of type (x: Int)Boolean is not defined class Empty
 extends IntSet {
       ^"

According to other posts the problem usually has to do with a mismatch in the method signature, but in this case "contains" in Empty has exactly the same signature as the one in IntSet.

by Klein at April 27, 2015 08:17 AM

CompsciOverflow

maximum no. of nodes that can be visited

Given an undirected, unweighted graph with edges '1#2', '2#3', '1#11', '3#11', '4#11', '4#5', '5#6', '5#7', '6#7', '4#12', '8#12', '9#12', '8#10', '9#10', '8#9', where for example '1#2' means node 1 and node 2 share a direct edge.

Now the question is: which algorithm can be used to find the maximum number of nodes that can be visited (starting from any node), visiting a node at most once and not traversing back along any edge?

by Meherban Singh at April 27, 2015 07:41 AM

StackOverflow

Scalamock: How to get "expects" for Proxy mocks?

I am using Scalamock with ScalaTest, and am trying to mock a Java interface. I currently have:

private val _iface = mock [MyInterface]

now I want to do

_iface expects 'someMethod returning "foo" once

But the compiler does not find expects.

I imported org.scalatest._ and org.scalamock.scalatest._. What else am I missing?

by rabejens at April 27, 2015 07:39 AM

StackOverflow

Spray scala building non blocking servlet

I've built a Scala application using spray with akka actors.

My problem is that the requests are handled synchronously and the server can't manage many requests at once.

Is that a normal behaviour? what can I do to avoid this?

This is my boot code:

object Boot extends App with Configuration {

  // create an actor system for application
  implicit val system = ActorSystem("my-service")
//context.actorOf(RoundRobinPool(5).props(Props[TestActor]), "router")
  // create and start property service actor
  val RESTService = system.actorOf(Props[RESTServiceActor], "my-endpoint")

  // start HTTP server with property service actor as a handler
  IO(Http) ! Http.Bind(RESTService, serviceHost, servicePort)
}

actor code:

class RESTServiceActor extends Actor 
                            with RESTService  {
  implicit def actorRefFactory = context

  def receive = runRoute(rest)
}



trait RESTService extends HttpService  with SLF4JLogging{
  val myDAO = new MyDAO



  val AccessControlAllowAll = HttpHeaders.RawHeader(
    "Access-Control-Allow-Origin", "*"
  )
  val AccessControlAllowHeadersAll = HttpHeaders.RawHeader(
    "Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept"
  )
  val rest = respondWithHeaders(AccessControlAllowAll, AccessControlAllowHeadersAll) {
    respondWithMediaType(MediaTypes.`application/json`) {
      options {
        complete {
          ""
        }
      } ~
      path("some" / "path") {
        get {
          parameter('parameter) { parameter =>
            ctx: RequestContext =>
              handleRequest(ctx) {
                myDAO.getResult(parameter)
              }
          }
        }
      }
    }
  }

    /**
   * Handles an incoming request and create valid response for it.
   *
   * @param ctx         request context
   * @param successCode HTTP Status code for success
   * @param action      action to perform
   */
  protected def handleRequest(ctx: RequestContext, successCode: StatusCode = StatusCodes.OK)(action: => Either[Failure, _]) {
    action match {
      case Right(result: Object) =>
        println(result)
        ctx.complete(successCode,result.toString())
      case Left(error: Failure) =>
      case _ =>
        ctx.complete(StatusCodes.InternalServerError)
    }
  }
}

I saw that:

Akka Mist provides an excellent basis for building RESTful web services in Scala since it combines good scalability (enabled by its asynchronous, non-blocking nature) with general lightweight-ness

Is that what I'm missing? Does spray use it by default, or do I need to add it, and how?

I'm a bit confused about it. Any help is appreciated.
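A guess worth testing (a sketch, assuming myDAO.getResult blocks on I/O): spray itself is non-blocking, but a route that calls a blocking DAO inline occupies the route until the call finishes, serializing requests. Wrapping the call in a Future and completing with the onComplete directive frees the route:

import scala.concurrent.Future
import scala.util.Success

// inside the routing trait; execution context for the Future:
implicit def executionContext = actorRefFactory.dispatcher

get {
  parameter('parameter) { parameter =>
    onComplete(Future(myDAO.getResult(parameter))) {  // DAO call off the route
      case Success(Right(result)) => complete(StatusCodes.OK, result.toString)
      case _                      => complete(StatusCodes.InternalServerError)
    }
  }
}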

by griffon vulture at April 27, 2015 07:20 AM

Nested queries with ReactiveMongo

I have an article collection where fields "title" and "publication" has a unique combined key constraint.

When calling insertOrUpdateArticle(a: Article) it will first try to insert the article; in case it hits the constraint, it should update the article, if needed.

However, I'm stuck before that. Current error is:

Error:(88, 57) type mismatch;
 found   : scala.concurrent.Future[scala.concurrent.Future[Boolean]]
 required: Boolean
            col_article.find(selector).one[Article].map {

Source:

def insertOrUpdateArticle(a: Article): Future[Boolean] = {
  // try insert article
  col_article.insert[Article](a).map {
    // article inserted
    lastError => {
      println("Article added.")
      true
    }
  }.recover {
    case lastError: LastError =>
      // check if article existed
      lastError.code.get match {
        case 11000 => {
          // article existed (duplicate key error)

          // load old article
          val selector = BSONDocument(
            "title" -> BSONString(a.title),
            "publication" -> BSONString(a.publication)
          )

          col_article.find(selector).one[Article].map {
            case Some(old_a: Article) => {
              // TODO: compare a with old_a
              // TODO: if a differs from old_a, update
              Future(true)
            }
            case None => {
              // something went wrong
              Future(false)
            }
          }
        }
        case _ => {
          println("DB.insertOrUpdateArticle() unexpected error code when inserting: " + lastError.code.get)
          false
        }
      }
    case ex =>
      println("DB.insertOrUpdateArticle() unexpected exception when inserting: " + ex)
      false
  }
}

I'm unsure what to do here. The code should return Future(true) if the article was saved or updated, and false otherwise. There's something about reactivemongo and/or Scala futures I'm missing here.
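The type error itself is mechanical: recover must produce the same result type (Boolean here), while this branch produces a Future[Boolean]. A sketch of the usual reshaping with recoverWith and flatMap, so the inner future is flattened instead of nested (simplified; comparison/update logic elided, selector as built above):

def insertOrUpdateArticle(a: Article): Future[Boolean] =
  col_article.insert[Article](a).map(_ => true).recoverWith {
    case lastError: LastError if lastError.code.exists(_ == 11000) =>
      // duplicate key: load the old article and decide
      col_article.find(selector).one[Article].flatMap {
        case Some(old_a) => Future.successful(true)  // TODO compare and update
        case None        => Future.successful(false)
      }
    case _ => Future.successful(false)
  }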

by Wrench at April 27, 2015 07:17 AM

StackOverflow

Is there any sbt plugin can export the dependencies to a json file?

I want to export the dependencies of my sbt projects, so I can do some analysis and searching across all projects at once.

I need a way to export the dependencies (including transitive dependencies) to a file in JSON or some other easy-to-parse format.

Is there any plugin or tool that can do this?

I notice there is an sbt-dependency-tree plugin, but it can only export to some graph formats, which is not what I want.

by Freewind at April 27, 2015 07:01 AM

Capitalize the first letter of every word in Scala

I know this way:

val str = org.apache.commons.lang.WordUtils.capitalizeFully("is There any other WAY")

I want to know whether there is any other way to do the same.

Something in Scala style.
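For example, a dependency-free sketch using StringOps.capitalize (behavior differs slightly from WordUtils on irregular whitespace):

val str = "is There any other WAY"
  .split("\\s+")
  .map(_.toLowerCase.capitalize)
  .mkString(" ")
// "Is There Any Other Way"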

by Govind Singh Nagarkoti at April 27, 2015 07:00 AM

CompsciOverflow

Bits used for logical and physical addresses

Consider a simple paging system with a page table containing 64 entries of 11 bits (including the valid/invalid bit) each, and a page size of 512 bytes.

  1. How many bits are used in a logical address?
  2. How many bits are used in a physical address?

Please explain in steps how to get the answer. I'm aware of the theories behind this question but I do not have a clear idea of how to apply them to the given scenario.
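Not an authoritative answer, but a sketch of the standard computation (assuming the page table covers the whole virtual address space, and that 1 of the 11 bits is the valid bit, leaving 10 bits for the frame number):

$$\text{logical address} = \underbrace{\log_2 64}_{\text{page number}} + \underbrace{\log_2 512}_{\text{offset}} = 6 + 9 = 15 \text{ bits}$$

$$\text{physical address} = \underbrace{11 - 1}_{\text{frame number}} + \underbrace{9}_{\text{offset}} = 19 \text{ bits}$$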

by shiran ekanayake at April 27, 2015 06:59 AM

StackOverflow

Conditional composition operator

I'd like to know if there exists some built-in function composition operator in clojure allowing me to rewrite something like:

(def first-or-identity #(if (sequential? %) (first %) (identity %)))

into a shorter:

(def first-or-identity (if-composition sequential? first identity)

--

The use case would be to be able to write something along those lines:

(def eventbus-pub
  (async/pub eventbus (if-composition sequential? first identity)))

Thanx!

by QuidNovi at April 27, 2015 06:32 AM

CompsciOverflow

Recoloring bipartite graphs

Given a bipartite graph $G = (A,B,E)$ where every vertex is colored either red or blue I am trying to minimize the number of blue vertices using the following operation:

  1. Choose a vertex $v_a$ in $A$
  2. Flip the colors of $N[v_a]$, meaning that $v_a$ and every neighbor of $v_a$ will change color.

Is there a polynomial time algorithm to select a recoloring set $X \subseteq A$ that will minimize the number of blue vertices? The number of recolorings needed is not relevant.

Observe that the order of flipping does not matter, and for every vertex in $A$, you either flip it or you don't. We can think of the colors as a number which is incremented modulo 2. This yields a trivial $O(2^{|A|} \cdot n)$ algorithm.

by Davis Yoshida at April 27, 2015 06:22 AM

StackOverflow

command not found: fluentd

I am trying the fluentd tutorial (http://docs.fluentd.org/articles/install-by-gem), but I cannot progress because of "fluentd: command not found".

[~] sudo gem install fluentd --no-ri --no-rdoc
Successfully installed fluentd-0.12.8
1 gem installed
[~] which fluentd
fluentd not found
[~] whereis fluentd
[~] fluentd --setup ./fluent
zsh: command not found: fluentd

I followed preinstallation guide settings. Please help.

Mac OSX Yosemite 10.10.1 ruby 2.2.0

by user1436614 at April 27, 2015 06:14 AM

How do clojure core.async channels get cleaned up?

I'm looking at Clojure core.async for the first time, and was going through this excellent presentation by Rich Hickey: http://www.infoq.com/presentations/clojure-core-async

I had a question about the example he shows at the end of his presentation:

core.async Web Example

According to Rich, this example basically tries to get a web, video, and image result for a specific query. It tries two different sources in parallel for each of those results, and just pulls out the fastest result for each. And the entire operation can take no more than 80ms, so if we can't get e.g. an image result in 80ms, we'll just give up. The 'fastest' function creates and returns a new channel, and starts two go processes racing to retrieve a result and put it on the channel. Then we just take the first result off of the 'fastest' channel and slap it onto the c channel.

My question: what happens to these three temporary, unnamed 'fastest' channels after we take their first result? Presumably there is still a go process which is parked trying to put the second result onto the channel, but no one is listening so it never actually completes. And since the channel is never bound to anything, it doesn't seem like we have any way of doing anything with it ever again. Will the go process & channel "realize" that no one cares about their results any more and clean themselves up? Or did we essentially just "leak" three channels / go processes in this code?

by Ord at April 27, 2015 06:12 AM

/r/compsci

Want to major in Computer Science.

Hi guys, I was just wondering what you guys thought of the major, I am currently a senior in high school and have wanted to major in CS since the beginning and I just wanted to know what it really takes to get an A. I am currently in AP Calculus and I'm doing decent in it. Any info would help ;) thanks.

submitted by kakosi
[link] [5 comments]

April 27, 2015 05:54 AM

StackOverflow

play framework with mongodb

I am using mongo with the play framework via "reactivemongo", which makes an async bridge between the mongo connection and the program. For standalone projects I always use the casbah lib - it has a more natural syntax for me (sometimes using Futures for each request is not needed, and my religion does not allow me to use Async.await for blocking each request), no actor overhead, and I also don't like the JSON/BSON conversion overhead.

But using casbah in the play framework the direct way (just creating a Mongo connection in the controller) produces connection leaks - meaning you would have to create a connection pool and control it yourself, in other words rewrite reactivemongo.

Has anybody used casbah with mongo in production? What is the best and most canonical way of creating and controlling connections in the play ecosystem?

by Oleg Golovanov at April 27, 2015 05:13 AM

CompsciOverflow

Normalizing edge weights and the effect on Dijkstra's algorithm

If I had a graph $G$ with some negative edge weights, clearly Dijkstra's algorithm does not definitely halt, since it might get caught in a negative cycle (shedding infinite weight). However, would finding the minimum weight (most negative weight) $w$ and adding the absolute value to every edge's weight preserve the shortest path, and get rid of the possibility of a negative cycle in one fell swoop? I can't seem to find any good literature on this, which makes me think that it can't be true.
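For what it's worth, a standard counterexample (numbers chosen here for illustration) shows that a uniform shift does not preserve shortest paths, because paths with more edges absorb the offset more times. Take $w(A,B) = w(B,C) = -1$ and $w(A,C) = -1.5$: the path $A \to B \to C$ costs $-2$, beating the direct edge at $-1.5$. Adding $|-1.5| = 1.5$ to every edge gives $A \to B \to C$ a cost of $0.5 + 0.5 = 1$ versus $0$ for the direct edge, so the shortest path flips. (This is why Johnson's algorithm reweights with vertex potentials instead of a uniform shift.)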

by ILoveCliques at April 27, 2015 05:13 AM

/r/compsci

Help with generating Turing Machine functions

I designed a program which creates a hash map from (current state, current bit) to (next state, new bit, direction). Current bit is the bit you are currently looking at in the input string; new bit is what the current bit will be replaced by. So for example, given the input string 11101110000, x+y will generate 1111110000000... i.e. move all the ones to one side to represent x+y in unary. Here is the example of adding x+y: the left side is (current state, current bit); the right side is (new state, new bit, direction). If direction = 1 you move one to the right, and if direction = -1 you move to the left.

(0, 1) => (0, 1, 1) // move right over arg1

(0, 0) => (1, 0, 1) // skip to arg2

(1, 0) => (3, 0, 0) // halt at end of arg2

(1, 1) => (2, 0, -1) // erase next bit from arg2

(2, 0) => (0, 1, 1) // scooch 1 left

(2, 1) => (4, 1, 0)

I need help with generating transition functions (the same way as for x+y) for x2

submitted by unwhi908
[link] [comment]

April 27, 2015 04:55 AM

TheoryOverflow

A trivial question on hierarchy [on hold]

According to wiki we know that $\mathsf{ACC^0\subseteq TC^0\subseteq NC^1\subseteq L\subseteq P\subseteq NP\subsetneq NEXPTIME}.$

Class $\mathsf{ACC^0}$ is included in $\mathsf{TC^0}$ is in http://en.wikipedia.org/wiki/ACC0#Computational_power.

Class $\mathsf{TC^0}$ is included in $\mathsf{NC^1}$ is in https://en.wikipedia.org/wiki/TC0.

Class $\mathsf{NC^1}$ is included in $\mathsf{L}$ which is included in $\mathsf{P}$ is in https://en.wikipedia.org/wiki/NC_(complexity)#The_NC_hierarchy.

Class $\mathsf{P}$ is included in $\mathsf{NP}$ is in https://en.wikipedia.org/wiki/P_(complexity)#Relationships_to_other_classes.

Class $\mathsf{NP\subsetneq NEXPTIME}$ is in http://en.wikipedia.org/wiki/NEXPTIME from time hierarchy theorem.

So does it mean that we already knew $\mathsf{ACC^0\subsetneq NEXPTIME}$ even before Ryan Williams' breakthrough (http://en.wikipedia.org/wiki/ACC0#Computational_power)?


It seems from the discussion below (with Niel de Beaudrap and Ricky Demer) that the inclusion $\mathsf{ACC^0\subseteq TC^0}$ mentioned in http://en.wikipedia.org/wiki/ACC0#Computational_power is false. Could someone please clarify?

by Turbo at April 27, 2015 04:46 AM

StackOverflow

Map table with more than 22 columns to Scala case class by Slick 2.1.0

I'm using Scala 2.11, Slick 2.1.0-M2, PlayFramework 2.3.1.

I need to map 25 columns table to Scala's case class.

For example I have this case class:

case class Test(f1: Long, f2: String, f3: String, f4: String, f5: String, 
                f6: String, f7: String, f8: String, f9: String, f10: String, 
                f11: String, f12: String, f13: String, f14: String, f15: String, 
                f16: String, f17: String, f18: String, f19: String, f20: String, 
                f21: String, f22: String, f23: Float, f24: Float, f25: String)

I read that it is possible to write a custom Shape (proof), but all my attempts to implement it have failed.

Please help me map this case class to the table.

by Lunigorn at April 27, 2015 04:36 AM

/r/emacs

How do I enable SVG support for Emacs on OS X (Homebrew)?

I have installed emacs on OSX through homebrew using the rsvg option but still can't seem to enable SVG support when I try to use the svg-mode-line-themes.

This is what I used:

brew install emacs --HEAD --use-git-head --with-cocoa --with-gnutls --with-rsvg --with-imagemagick --with-ns 

I also did install librsvg through homebrew. What am I missing?

Thanks!

submitted by Mysticity
[link] [8 comments]

April 27, 2015 04:23 AM

Lobsters

StackOverflow

How to get default constructor parameter using reflection?

This kind of seemed easy to figure out, but now I am confused:

scala> class B(i:Int)
defined class B

scala> classOf[B].getDeclaredFields
res12: Array[java.lang.reflect.Field] = Array()

Note this:

scala> class C(i:Int){
     | val j = 3
     | val k = -1
     | }
defined class C

scala> classOf[C].getDeclaredFields
res15: Array[java.lang.reflect.Field] = Array(private final int C.j, private final int C.k)
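This matches how scalac compiles constructor parameters (standard behavior, easy to verify in the REPL): a plain parameter only becomes a field if the class body uses it outside the constructor; adding val (or var) forces a backing field plus an accessor. A quick sketch:

class B(i: Int)                 // i is consumed by the constructor, no field emitted
class B2(val i: Int)            // val promotes the parameter to a field + getter
class B3(i: Int) { def j = i }  // i must be stored for j, so a field appears

classOf[B].getDeclaredFields.length   // 0
classOf[B2].getDeclaredFields.length  // 1
classOf[B3].getDeclaredFields.length  // 1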

by Dragonborn at April 27, 2015 04:15 AM

Scala Listener/Observer

Typically, in Java, when I've got an object who's providing some sort of notification to other objects, I'll employ the Listener/Observer pattern.

Is there a more Scala-like way to do this? Should I be using this pattern in Scala, or is there something else baked into the language I should be taking advantage of?

by Jeb at April 27, 2015 04:12 AM

Planet Emacsen

Eric James Michael Ritz: Emacs Alternatives for the pastebin Package

When discussing code with people I liked using pastebin to share snippets or files of code. I used Nic Ferrier's great Emacs package for sharing such things straight out of Emacs. However, pastebin changed their API not too long ago, thus breaking Ferrier's package. He doesn't want to fix it, and I don't blame him. More importantly, no one should criticize a developer's decision to walk away from an entirely voluntary project.

So that all being the case, tonight I want to share some packages I have started to use as replacements. None of them are significantly different in functionality. More than anything they differ in which sites they interact with.

dpaste.el

This package lets you post entire buffers or regions to dpaste.com. It is simple but effective for its purpose. Note, however, that to use dpaste.el you will need to install the curl command-line application.

ix.el

Next is a package that lets you post to ix.io. Same functionality you’ll find in many of these packages.

gist.el

If you use GitHub (or not, honestly) then you should be aware of 'gists'. The gist.el package not only allows you to post new gists; it provides a full interface for managing the gists in your GitHub account. This includes editing existing gists, deleting them, toggling their privacy, and even removing or adding files to a gist. If you often make gists then you should find this package to be damned useful.

emacs-pastebin

This is the only package mentioned in this article which I have not yet used. Therefore I cannot speak to its quality, or whether it even works with pastebin's new API. Either way, emacs-pastebin is more like gist.el than the original pastebin package.

If there are any other similar packages you like to use then please mention them by leaving a comment below. In the mean time I hope you find one or more of these packages useful.


by ericjmritz at April 27, 2015 04:10 AM

StackOverflow

Scala - implicit evidence of list of tuples of lists

I am having difficulty making the implicit requirements of flatUnzip work properly. Currently it seems the first requirement that A is a Tuple2[CC1[T1], CC2[T2]] is being ignored (and hence the sanity check fails to compile). Any suggestions here? When answering, please also explain what is wrong with my current attempt.

  class MySeq[A](val _seq: Seq[A]) extends AnyVal {
    def flatUnzip[T1, T2, CC1[T1], CC2[T2]](
      implicit ev1: A =:= Tuple2[CC1[T1], CC2[T2]],
      ev2: CC1[T1] <:< TraversableOnce[T1],
      ev3: CC2[T2] <:< TraversableOnce[T2],
      cbf1: CanBuildFrom[CC1[T1], T1, CC1[T1]],
      cbf2: CanBuildFrom[CC2[T2], T2, CC2[T2]]
    ): (CC1[T1], CC2[T2]) = {
      val foo: Seq[Tuple2[CC1[T1], CC2[T2]]] = _seq // sanity check fails
      val list1 = cbf1()
      val list2 = cbf2()
      for ((xs, ys) <- _seq) {
        list1 ++= xs
        list2 ++= ys
      }
      return (list1.result, list2.result)
    }
  }

EDIT

I found that the following works, but only when the =:= is applied in the direction as shown:

  class MySeq[A](val _seq: Seq[A]) extends AnyVal {
    def mapBy[B](func: A => B): Map[B, A] = _seq.map(x => (func(x), x)).toMap
    def flatUnzip[T1, T2, CC1[T1], CC2[T2]](
      implicit
      ev1: Tuple2[CC1[T1], CC2[T2]] =:= A,
      ev2: Seq[A] =:= Seq[Tuple2[CC1[T1], CC2[T2]]],
      ev3: CC1[T1] <:< TraversableOnce[T1],
      ev4: CC2[T2] <:< TraversableOnce[T2],
      cbf1: CanBuildFrom[CC1[T1], T1, CC1[T1]],
      cbf2: CanBuildFrom[CC2[T2], T2, CC2[T2]]
    ): (CC1[T1], CC2[T2]) = {
      val list1 = cbf1()
      val list2 = cbf2()
      for ((xs, ys) <- _seq: Seq[Tuple2[CC1[T1], CC2[T2]]]) {
        list1 ++= xs
        list2 ++= ys
      }
      return (list1.result, list2.result)
    }
  }

However, replacing Seq[A] =:= Seq[Tuple2[CC1[T1], CC2[T2]]] with Seq[Tuple2[CC1[T1], CC2[T2]]] =:= Seq[A] or Tuple2[CC1[T1], CC2[T2]] =:= A with A =:= Tuple2[CC1[T1], CC2[T2]] causes problems. Can someone please explain why the order here matters, and why each of these A =:= B relationships is needed to make this work?

by Kvass at April 27, 2015 03:47 AM

Planet Theory

The Anti-Pigeonhole Conjecture


A conjecture about faculty behavior

[image: “Dr. Kibzwang” (source)]

Colin Potts is Vice Provost for Undergraduate Education at Georgia Tech. His job includes being a member of the President’s Cabinet—our president, not the real one—and he is charged with academic policies and changes to such policies. He is also a College of Computing colleague and fellow chess fan.

Today I want to state a conjecture about the behavior of faculty that arose when Tech tried to change a policy.

I am currently at Georgia Tech, but this conjecture applies I believe to all institutions, all faculty. Ken mostly agrees. Potts recently supplied a wonderful example of the conjecture in action—I will get to that after I formally state it. Perhaps we should call it Potts’s Conjecture?

The Conjecture

The conjecture is easy to state:

Conjecture 1 Let {X} be any issue and let {A_{1},\dots,A_{n}} be any collection of distinct faculty members. Then during a long enough period of email exchanges among the above faculty on {X} at least {n+1} opinions will be voiced.

You can see why I refer to it as an anti-pigeonhole principle. Right?

I have tried to prove the conjecture—I view it as a kind of Arrow’s Paradox. I have failed so far to obtain a formal proof of it. The conjecture does have the interesting corollary:

Corollary 2 Let {X} be any issue and let {A_{1},\dots,A_{n}} be any collection of distinct faculty members. Then during a long enough period of email exchanges on the issue {X} some faculty member {A_{i}} will voice at least two different opinions.

A weaker version that we will cleverly call The Weak Conjecture is the following:

Conjecture 3 Let {X} be any issue and let {A_{1},\dots,A_{n}} be any collection of distinct faculty members. Then during a long enough period of email exchanges on the issue {X} at least {\sqrt{n}} opinions will be voiced.

The point is that the total number of opinions is unbounded. Strong or weak, we can call it the Conjecture.

The Example

Of course, being mathematicians we want proofs not examples. But as in areas like number theory, one is often led to good conjectures by observations. In any event simple tests of conjectures are useful to see if they are plausible enough to try to prove.

Here is the policy change that has been suggested. You are free to skip this or go here for even more detail. The point is that this is the issue {X}.

Per the proposal, starting in fall 2015, classes would not meet on the Wednesday before Thanksgiving, giving students an additional day for their break. A change implemented as a pilot this spring will continue to stand, which eliminated finals being held during the last exam session on the Friday before Commencement to prevent finals overlapping with graduation festivities. Starting the next academic year, it was approved to extend the individual course withdrawal deadline by two weeks, allowing students more time to evaluate whether to drop a class.

In Spring 2016, the current Dead Week would be replaced with Final Instructional Class Days and Reading Periods. The new schedule would designate Monday and Tuesday of the penultimate week of the semester as Final Instructional Class Days, followed by a day and a half of reading period, and administering the first final on Thursday afternoon. Finals would be broken up by that weekend and resume Monday, with an additional reading period the next Tuesday morning. Finals would finish that Thursday, allowing Friday for conflict periods and a day between exams and Commencement.

{\dots}

The final recommendation would extend the length of Monday/Wednesday/Friday classes during spring and fall semesters from 50 to 55 minutes. Breaks between classes would extend from 10 to 15 minutes.

Pretty exciting, no? No.

The result of Potts announcing the above was a storm of emails from our faculty members. As you would expect, given the Conjecture, this quickly led to a vast number of opinions. The number of opinions seems easily to follow the Conjecture.

No-Gradient Hypothesis

Ken analogizes this kind of policy tuning for a university {U} to finding a regional optimum in the landscape of a multivariable function {f_U}. A proposal like Potts’s, with so many little changed parts, resembles a step in simulated annealing where one periodically jumps out of a well to test for better conditions in another. He is not surprised that such a ‘jump’ would bring multiple reactions from faculty.

Even so, however, one would expect there to be a gradient in the new region so that opinions could converge to the bottom of the new well. This is a different matter: a helpful gradient should be in force after a jump.

April is the month when US undergraduates have been informed of all their college acceptances and in many—fortunate—cases must make a choice. Ken has a front-row seat this year. From comparing various colleges and universities with widely different policies, and noting the market incentive to diversify, he has come to a conjecture of his own:

Conjecture 4 There is no gradient: for any university {U}, {\nabla f_U} is defined only on a set of measure zero.

To all appearances, this conjecture implies the others. Is it capable of being proved? Again you—our readers—are best placed to furnish input for a proof.

Open Problems

Do you believe any of the conjectures? I hope we get lots of opinions…

Ken and I are divided: he thinks we will not get many, I think we will get a lot, and we both think that we may get just a couple. But in my opinion it is possible that …

[added to first paragraph; format fixes]


by Pip at April 27, 2015 03:43 AM

QuantOverflow

Excel to Java for Interactive brokers

I have a working Excel workbook connected to the Interactive Brokers DDE API. I am struggling to upgrade to a more robust environment like Java. I tried to change it to ActiveX for Excel, but the refresh rate is limited and IB does not support a Microsoft RTD server. Can anybody suggest the easiest way to migrate my DDE workbook to Java? IB provides a Java sample which provides the basic connection to TWS. Is it possible to feed quotes from the Java API into Excel for calculation?

by Marcus at April 27, 2015 03:32 AM

StackOverflow

Using an implicit parameter in a recursive function

Consider the following hypothetical binary tree traversal code:

def visitAll(node: Node, visited: Set[Node]): Unit = {
  val newVisited = visited + node
  if (visited contains node) throw new RuntimeException("Cyclic tree definition")
  else if (node.hasLeft) visitAll(node.left, newVisited)
  else if (node.hasRight) visitAll(node.right, newVisited)
  else ()
}

I would like to reduce duplication by making the visited parameter implicit, like so:

def visitAll(node: Node)(implicit visited: Set[Node]): Unit = {
  if (visited contains node) throw new RuntimeException("Cyclic tree definition")
  implicit val newVisited = visited + node
  if (node.hasLeft) visitAll(node.left) // newVisited passed implicitly
  if (node.hasRight) visitAll(node.right) // newVisited passed implicitly
}

however, this gives the following compile error:

ambiguous implicit values: both value visited of type Set[Node] and value newVisited of type scala.collection.immutable.Set[Node] match expected type Set[Node]

Is there a way I can tell the compiler to just "expect" an implicit value for the visited parameter, but not to use it as the implicit value when recursively invoking the method?
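One workaround that seems to fit (a sketch, not necessarily the only option): keep the parameter implicit for outside callers, but pass the updated set explicitly at the recursive call sites, so the compiler never has to choose between the two implicit Sets in scope:

def visitAll(node: Node)(implicit visited: Set[Node]): Unit = {
  if (visited contains node) throw new RuntimeException("Cyclic tree definition")
  val newVisited = visited + node  // not marked implicit, so no ambiguity
  if (node.hasLeft) visitAll(node.left)(newVisited)   // explicit second argument list
  if (node.hasRight) visitAll(node.right)(newVisited)
}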

by Zoltán at April 27, 2015 03:20 AM

/r/compsci

Can somebody explain Hot-Cold Cache Replacement?

Hi, sorry if I'm in the wrong subreddit but I'm not sure where else to post.

I'm working on a cache simulator. One section of my project is to implement a fully associative cache with a hot-cold replacement strategy.

I've already implemented the fully associative cache with LRU, but I do not understand the hot-cold approach very well, and I can't seem to find any resources on the internet. I understand it has something to do with a binary tree, but beyond that I'm lost.

Any help is appreciated. Thanks.
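In case it helps, "hot-cold" usually means tree pseudo-LRU: keep N-1 bits arranged as a complete binary tree over the N ways, each bit marking which subtree is currently "cold". On an access, flip the bits on the accessed way's root-to-leaf path so they point away from it (it is now hot); to choose a victim, walk from the root following the cold pointers. A sketch for N a power of two (the array layout and bit convention here are my own choices):

#include <stdio.h>

#define N 8                     /* number of ways, a power of two */
static int bits[N - 1];         /* bits[i] == 0: cold side is the left child */

/* Mark way w as most recently used: point every ancestor away from it. */
static void touch(int w)
{
    int node = w + N - 1;       /* leaf index in the implicit tree */
    while (node > 0) {
        int parent = (node - 1) / 2;
        /* came from the left child => the cold side is now the right */
        bits[parent] = (node == 2 * parent + 1) ? 1 : 0;
        node = parent;
    }
}

/* Choose a victim by walking toward the cold side from the root. */
static int victim(void)
{
    int node = 0;
    while (node < N - 1)
        node = 2 * node + 1 + bits[node];
    return node - (N - 1);      /* convert the leaf index back to a way */
}

int main(void)
{
    touch(3);
    touch(5);
    printf("evict way %d\n", victim());   /* some way other than 3 or 5 */
    return 0;
}

Unlike true LRU this needs only N-1 bits instead of a full recency ordering, at the cost of only approximating the least-recently-used way.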

submitted by Planet_Of_The_Snapes

April 27, 2015 03:09 AM

QuantOverflow

Why aren't there any single owner companies over a billion dollars? [on hold]

The biggest companies have multiple owners, which dilutes the authority and finances of any one of them. They are either publicly traded companies selling shares through stock markets, or privately owned with shares held by multiple stakeholders or board members. Why aren't there any billion-dollar companies with a single owner (e.g., one founder who never sold shares or gave up equity)?

Just curious, because the founder(s) of a successful company, especially in tech startups, often have to bring their own idea to fruition from the early startup phase, often carrying the company on their back for the first six months or so.

by cortell davis at April 27, 2015 03:07 AM

How to simulate stock prices with a Geometric Brownian Motion?

I want to simulate stock price paths with different stochastic processes. I started with the famous geometric Brownian motion. I simulated the values with the following formula:

$$R_i=\frac{S_{i+1}-S_i}{S_i}=\mu \Delta t + \sigma \varphi \sqrt{\Delta t}$$

with:

$\mu= $ sample mean

$\sigma= $ sample volatility

$\Delta t = $ 1 (1 day)

$\varphi=$ normally distributed random number

I used a shortcut for the simulation: simulate normally distributed random numbers with the sample mean and sample standard deviation.

Multiply each by the stock price; this gives the price increment.

Add the price increment to the stock price, which gives the simulated stock price value. (This methodology can be found here.)

So I thought I understood this, but now I found the following formula, which is also the geometric brownian motion:

$$ S_t = S_0 \exp\left[\left(\mu - \frac{\sigma^2}{2}\right) t + \sigma W_t \right] $$

I do not understand the difference. What does the second formula say in comparison to the first? Should I have used the second one? How would I simulate with the second formula?
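For what it's worth, a sketch of the standard relationship between the two: the first formula is the Euler discretization of the GBM dynamics $dS_t = \mu S_t\,dt + \sigma S_t\,dW_t$, while the second is the exact solution of that SDE, obtained by applying Itô's lemma to $\ln S_t$ (the $-\sigma^2/2$ term is the Itô correction). To simulate with the exact form, iterate

$$S_{t+\Delta t} = S_t \exp\left[\left(\mu-\frac{\sigma^2}{2}\right)\Delta t + \sigma\sqrt{\Delta t}\,\varphi\right]$$

with a fresh $\varphi \sim N(0,1)$ at each step. Unlike the Euler scheme, this is exact for any step size and can never produce negative prices.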

by user1690846 at April 27, 2015 03:02 AM

CompsciOverflow

The trivial solution of perceptron

The perceptron model minimizes the sum of the distances from misclassified points to the hyperplane, that is: $$\sum_{x_i\in M}\frac{-y_i(wx_i+b)}{\|w\|}$$ where $M$ is the set of misclassified points. However, $\|w\|$ is then dropped and the objective becomes: $$\sum_{x_i\in M}{-y_i(wx_i+b)}$$ I am wondering why, because this admits the trivial solution $w=0,b=0$.

Or is it that, in practice, gradient descent will not reach this trivial solution?

Thank you!

by Baisheng at April 27, 2015 02:42 AM

StackOverflow

Force unlock a mutex that was locked by a different thread

Consider the following test program:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <strings.h>
#include <unistd.h>
#include <signal.h>
#include <pthread.h>


pthread_mutex_t mutex;
pthread_mutexattr_t mattr;
pthread_t thread1;
pthread_t thread2;
pthread_t thread3;


void mutex_force_unlock(pthread_mutex_t *mutex, pthread_mutexattr_t *mattr)
  {
    int e;
    e = pthread_mutex_destroy(mutex);
    printf("mfu: %s\n", strerror(e));
    e = pthread_mutex_init(mutex, mattr);
    printf("mfu: %s\n", strerror(e));
  }

void *thread(void *d)
  {
    int e;

    e = pthread_mutex_trylock(&mutex);
    if (e != 0)
      {
        printf("thr: %s\n", strerror(e));
        mutex_force_unlock(&mutex, &mattr);
        e = pthread_mutex_unlock(&mutex);
        printf("thr: %s\n", strerror(e));
        if (e != 0) pthread_exit(NULL);
        e = pthread_mutex_lock(&mutex);
        printf("thr: %s\n", strerror(e));
      }
    pthread_exit(NULL);
  }


void * thread_deadtest(void *d)
  {
    int e;
    e = pthread_mutex_lock(&mutex);
    printf("thr2: %s\n", strerror(e));
    e = pthread_mutex_lock(&mutex);
    printf("thr2: %s\n", strerror(e));
    pthread_exit(NULL);
  }


int main(void)
  {
    /* Setup */
    pthread_mutexattr_init(&mattr);
    pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_ERRORCHECK);
    //pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_NORMAL);
    pthread_mutex_init(&mutex, &mattr);

    /* Test */
    pthread_create(&thread1, NULL, &thread, NULL);
    pthread_join(thread1, NULL);
    if (pthread_kill(thread1, 0) != 0) printf("Thread 1 has died.\n");
    pthread_create(&thread2, NULL, &thread, NULL);
    pthread_join(thread2, NULL);
    pthread_create(&thread3, NULL, &thread_deadtest, NULL);
    pthread_join(thread3, NULL);
    return(0);
  }

Now when this program runs, I get the following output:

Thread 1 has died.
thr: Device busy
mfu: Device busy
mfu: No error: 0
thr: Operation not permitted
thr2: No error: 0
thr2: Resource deadlock avoided

Now I know this has been asked a number of times before, but is there any way to forcefully unlock a mutex? It seems the implementation only allows a mutex to be unlocked by the thread that locked it, and appears to actively check ownership even with a normal mutex type.

Why am I doing this? It has to do with coding a bullet-proof network server that can recover from most errors, including ones where a thread terminates unexpectedly. At this point, I can see no way of unlocking a mutex from a thread other than the one that locked it. So the way I see it, I have a few options:

  1. Abandon the mutex and create a new one. This is the undesirable option as it creates a memory leak.
  2. Close all network ports and restart the server.
  3. Go into the kernel internals and release the mutex there bypassing the error checking.

I have asked this before but, the powers that be absolutely want this functionality and they will not take no for an answer (I've already tried), so I'm kinda stuck with this. I didn't design it this way, and I would really like to shoot the person who did, but that's not an option either.

And before someone says anything, my usage of pthread_kill is legal under POSIX...I checked.

I forgot to mention, this is FreeBSD 9.3 that we are working with.
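For what it's worth, the POSIX-sanctioned mechanism for recovering a lock whose owner is gone is a robust mutex (pthread_mutexattr_setrobust, POSIX.1-2008), which avoids the undefined behavior of destroying a locked mutex. A sketch below; note the hedge that robust-mutex support arrived in FreeBSD releases later than 9.3, so it may not be available on your target:

#include <errno.h>
#include <pthread.h>

pthread_mutex_t mutex;

void setup_robust(void)
{
    pthread_mutexattr_t mattr;
    pthread_mutexattr_init(&mattr);
    pthread_mutexattr_setrobust(&mattr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&mutex, &mattr);
}

void lock_with_recovery(void)
{
    int e = pthread_mutex_lock(&mutex);
    if (e == EOWNERDEAD) {
        /* The previous owner died while holding the lock; we now own it.
           Repair the protected state, then mark the mutex usable again. */
        pthread_mutex_consistent(&mutex);
    }
    /* ... critical section ... */
    pthread_mutex_unlock(&mutex);
}

This only covers owners that die; it does not let an arbitrary live thread steal a mutex, which POSIX deliberately does not allow.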

by Daniel Rudy at April 27, 2015 02:27 AM

CompsciOverflow

hashing functions, help with proof

I am in the process of learning about hash functions and I came across this question, which asks for a proof. I am stuck and not sure where to start or how to prove it.

It asks me to prove that $\Pr(h(x) = j) = 1/m$. That is: for any slot $j$ in the hash table and any key $x$ in the universe, the probability that $h(x)$ hashes to $j$ is $1/m$, where $m$ is the size of the hash table. It is easy to see that this holds if $h$ is chosen uniformly at random, but how do I prove it? Any pointers would be more than appreciated.
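For what it's worth, a sketch of the standard counting argument, assuming (as this exercise appears to) that $h$ is drawn uniformly at random from the set of all functions from the universe $U$ to the $m$ slots:

$$\Pr[h(x) = j] = \frac{|\{h : h(x) = j\}|}{m^{|U|}} = \frac{m^{|U|-1}}{m^{|U|}} = \frac{1}{m}$$

since fixing $h(x) = j$ still leaves each of the other $|U|-1$ keys free to map to any of the $m$ slots.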

by yoyo burger at April 27, 2015 01:54 AM

Regular Expression for even-odd language of string

I am new to automata theory and would like to construct a regular expression for the "even-odd" language over $\Sigma = \{a,b\}$: the set of strings with an even number of $b$'s and an odd number of $a$'s. I am also interested in constructing a DFA and an NFA for the language.

I tried

Even number of $b$'s = $(a^{\ast}ba^{\ast}ba^{\ast})^{\ast}$
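As a side note, $(a^{\ast}ba^{\ast}ba^{\ast})^{\ast}$ misses strings that contain no $b$ at all (e.g. $aaa$); $a^{\ast}(ba^{\ast}ba^{\ast})^{\ast}$ handles the even-$b$ condition alone. For the DFA, the standard construction tracks the pair (parity of $a$'s, parity of $b$'s): four states, start state (even, even), accepting state (odd, even), and each input letter flips one coordinate. A minimal simulation of that DFA (a sketch; the encoding is mine):

# State is the pair (a_parity, b_parity); accept when a's odd, b's even.
def accepts(s):
    a, b = 0, 0                  # start state: (even, even)
    for ch in s:
        if ch == 'a':
            a ^= 1               # reading 'a' flips the a-parity
        elif ch == 'b':
            b ^= 1               # reading 'b' flips the b-parity
        else:
            raise ValueError("alphabet is {a, b}")
    return a == 1 and b == 0

print(accepts("aab"))   # False: a's even, b's odd
print(accepts("abb"))   # True: one a, two b's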

by Khurram Ali at April 27, 2015 01:48 AM

TheoryOverflow

Maximum computational power of a C implementation

If we go by the book (or any other version of the language specification if you prefer), how much computational power can a C implementation have?

Note that “C implementation” has a technical meaning: it is a particular instantiation of the C programming language specification where implementation-defined behavior is documented. A C implementation doesn't have to be able to run on an actual computer. It does have to implement the whole language, including every object having a bit-string representation and types having an implementation-defined size.

For the purpose of this question, there is no external storage. The only input/output you may perform is getchar (to read the program input) and putchar (to write the program output). Also any program that invokes undefined behavior is invalid: a valid program must have its behavior defined by the C specification plus the implementation's description of implementation-defined behaviors listed in appendix J (for C99). Note that calling library functions that are not mentioned in the standard is undefined behavior.

My initial reaction was that a C implementation is nothing more than a finite automaton, because it has a limit on the amount of addressable memory (you can't address more than sizeof(char*) * CHAR_BIT bits of storage, since distinct memory addresses must have distinct bit patterns when stored in a byte pointer).

However I think an implementation can do more than this. As far as I can tell, the standard imposes no limit on the depth of recursion. So you can make as many recursive function calls as you like, only all but a finite number of calls must use non-addressable (register) arguments. Thus a C implementation that allows arbitrary recursion and has no limit on the number of register objects can encode deterministic pushdown automata.

Is this correct? Can you find a more powerful C implementation? Does a Turing-complete C implementation exist?

by Gilles at April 27, 2015 01:41 AM

arXiv Networking and Internet Architecture

Understanding Game Theory via Wireless Power Control. (arXiv:1504.06607v1 [cs.GT])

In this lecture note, we introduce the basic concepts of game theory (GT), a branch of mathematics traditionally studied and applied in the areas of economics, political science, and biology, which has emerged in the last fifteen years as an effective framework for communications, networking, and signal processing (SP). The real catalyst has been the blooming of all issues related to distributed networks, in which the nodes can be modeled as players in a game competing for system resources. Some relevant notions of GT are introduced by elaborating on a simple application in the context of wireless communications, notably power control in an interference channel (IC) with two transmitters and two receivers.

by <a href="http://arxiv.org/find/cs/1/au:+Bacci_G/0/1/0/all/0/1">Giacomo Bacci</a>, <a href="http://arxiv.org/find/cs/1/au:+Sanguinetti_L/0/1/0/all/0/1">Luca Sanguinetti</a>, <a href="http://arxiv.org/find/cs/1/au:+Luise_M/0/1/0/all/0/1">Marco Luise</a> at April 27, 2015 01:30 AM

A Lex-BFS-based recognition algorithm for Robinsonian matrices. (arXiv:1504.06586v1 [cs.DM])

Robinsonian matrices arise in the classical seriation problem and play an important role in many applications where unsorted similarity (or dissimilarity) information must be reordered. We present a new polynomial time algorithm to recognize Robinsonian matrices based on a new characterization of Robinsonian matrices in terms of straight enumerations of unit interval graphs. The algorithm is simple and is based essentially on lexicographic breadth-first search (Lex-BFS), using a divide-and-conquer strategy. When applied to a nonnegative symmetric $n\times n$ matrix $A$ with $m$ nonzero entries, given as a weighted adjacency list, it runs in $O(d(n+m))$ time, where $d$ is the depth of the recursion tree, which is at most the number of distinct nonzero entries of $A$.

by <a href="http://arxiv.org/find/cs/1/au:+Laurent_M/0/1/0/all/0/1">Monique Laurent</a>, <a href="http://arxiv.org/find/cs/1/au:+Seminaroti_M/0/1/0/all/0/1">Matteo Seminaroti</a> at April 27, 2015 01:30 AM

Resource requirements and speed versus geometry of unconditionally secure physical key exchanges. (arXiv:1504.06541v1 [cs.CR])

The imperative need for unconditionally secure key exchange is underscored by the increasing connectivity of networks and by the increasing number and sophistication of cyberattacks. Two concepts that are information-theoretically secure are quantum key distribution (QKD) and Kirchhoff-law-Johnson-noise (KLJN). However, these concepts require a dedicated connection between hosts in peer-to-peer (P2P) networks, which can be impractical and/or cost-prohibitive. A practical and cost-effective method is to have each host share their respective cable(s) with other hosts such that two remote hosts can realize a secure key exchange without the need of an additional cable or key exchanger. In this article we analyze the cost complexities of cable, key exchangers, and time required in the star network. We discuss the reliability of the star network and compare it with other network geometries. We also conceive a protocol and an equation for the number of secure bit exchange periods needed in a star network. We then outline other network geometries and trade-off possibilities that seem interesting to explore.

by <a href="http://arxiv.org/find/cs/1/au:+Gonzalez_E/0/1/0/all/0/1">Elias Gonzalez</a>, <a href="http://arxiv.org/find/cs/1/au:+Balog_R/0/1/0/all/0/1">Robert S. Balog</a>, <a href="http://arxiv.org/find/cs/1/au:+Kish_L/0/1/0/all/0/1">Laszlo B. Kish</a> at April 27, 2015 01:30 AM

An Automata-Theoretic Approach to the Verification of Distributed Algorithms. (arXiv:1504.06534v1 [cs.LO])

We introduce an automata-theoretic method for the verification of distributed algorithms running on ring networks. In a distributed algorithm, an arbitrary number of processes cooperate to achieve a common goal (e.g., elect a leader). Processes have unique identifiers (pids) from an infinite, totally ordered domain. An algorithm proceeds in synchronous rounds, each round allowing a process to perform a bounded sequence of actions such as send or receive a pid, store it in some register, and compare register contents wrt. the associated total order. An algorithm is supposed to be correct independently of the number of processes. To specify correctness properties, we introduce a logic that can reason about processes and pids. Referring to leader election, it may say that, at the end of an execution, each process stores the maximum pid in some dedicated register. Since the verification of distributed algorithms is undecidable, we propose an underapproximation technique, which bounds the number of rounds. This is an appealing approach, as the number of rounds needed by a distributed algorithm to conclude is often exponentially smaller than the number of processes. We provide an automata-theoretic solution, reducing model checking to emptiness for alternating two-way automata on words. Overall, we show that round-bounded verification of distributed algorithms over rings is PSPACE-complete.

by <a href="http://arxiv.org/find/cs/1/au:+Aiswarya_C/0/1/0/all/0/1">C. Aiswarya</a>, <a href="http://arxiv.org/find/cs/1/au:+Bollig_B/0/1/0/all/0/1">Benedikt Bollig</a>, <a href="http://arxiv.org/find/cs/1/au:+Gastin_P/0/1/0/all/0/1">Paul Gastin</a> at April 27, 2015 01:30 AM

Speculative Segmented Sum for Sparse Matrix-Vector Multiplication on Heterogeneous Processors. (arXiv:1504.06474v1 [cs.MS])

Sparse matrix-vector multiplication (SpMV) is a central building block for scientific software and graph applications. Recently, heterogeneous processors composed of different types of cores attracted much attention because of their flexible core configuration and high energy efficiency. In this paper, we propose a compressed sparse row (CSR) format based SpMV algorithm utilizing both types of cores in a CPU-GPU heterogeneous processor. We first speculatively execute segmented sum operations on the GPU part of a heterogeneous processor and generate possibly incorrect results. Then the CPU part of the same chip is triggered to re-arrange the predicted partial sums for a correct resulting vector. On three heterogeneous processors from Intel, AMD and nVidia, using 20 sparse matrices as a benchmark suite, the experimental results show that our method obtains significant performance improvement over the best existing CSR-based SpMV algorithms. The source code of this work is downloadable at https://github.com/bhSPARSE/Benchmark_SpMV_using_CSR

by <a href="http://arxiv.org/find/cs/1/au:+Liu_W/0/1/0/all/0/1">Weifeng Liu</a>, <a href="http://arxiv.org/find/cs/1/au:+Vinter_B/0/1/0/all/0/1">Brian Vinter</a> at April 27, 2015 01:30 AM

On Pairwise Compatibility of Some Graph (Super)Classes. (arXiv:1504.06454v1 [cs.DM])

A graph G=(V,E) is a pairwise compatibility graph (PCG) if there exists an edge-weighted tree T and two non-negative real numbers d and D such that each leaf u of T is a node of V and the edge (u,v) belongs to E iff d <= d_T(u, v) <= D, where d_T(u, v) is the sum of weights of the edges on the unique path from u to v in T. The main issue on these graphs consists in characterizing them.

In this note we prove the inclusion in the PCG class of threshold tolerance graphs and the non-inclusion of a number of intersection graphs, such as disk and grid intersection graphs, circular arc and tolerance graphs. The non-inclusion of some superclasses (trapezoid, permutation and rectangle intersection graphs) follows.

by <a href="http://arxiv.org/find/cs/1/au:+Calamoneri_T/0/1/0/all/0/1">Tiziana Calamoneri</a>, <a href="http://arxiv.org/find/cs/1/au:+Sinaimeri_B/0/1/0/all/0/1">Blerina Sinaimeri</a>, <a href="http://arxiv.org/find/cs/1/au:+Gastaldello_M/0/1/0/all/0/1">Mattia Gastaldello</a> at April 27, 2015 01:30 AM

A Framework for Managing Evolving Information Resources on the Data Web. (arXiv:1504.06451v1 [cs.DB])

The web of data has brought forth the need to preserve and sustain evolving information within linked datasets; however, a basic requirement of data preservation is the maintenance of the datasets' structural characteristics as well. As open data are often found using different and/or heterogeneous data models and schemata from one source to another, there is a need to reconcile these mismatches and provide common denominations of interpretation on a multitude of levels, in order to be able to preserve and manage the evolution of the generated resources. In this paper, we present a linked data approach for the preservation and archiving of open heterogeneous datasets that evolve through time, at both the structural and the semantic layer. We first propose a set of requirements for modelling evolving linked datasets. We then proceed to conceptualize a modelling framework for evolving entities and place these in a 2x2 model space that consists of the semantic and the temporal dimensions.

by <a href="http://arxiv.org/find/cs/1/au:+Meimaris_M/0/1/0/all/0/1">Marios Meimaris</a>, <a href="http://arxiv.org/find/cs/1/au:+Papastefanatos_G/0/1/0/all/0/1">George Papastefanatos</a>, <a href="http://arxiv.org/find/cs/1/au:+Pateritsas_C/0/1/0/all/0/1">Christos Pateritsas</a>, <a href="http://arxiv.org/find/cs/1/au:+Galani_T/0/1/0/all/0/1">Theodora Galani</a>, <a href="http://arxiv.org/find/cs/1/au:+Stavrakas_Y/0/1/0/all/0/1">Yannis Stavrakas</a> at April 27, 2015 01:30 AM

Testing the performance of technical trading rules in the Chinese market. (arXiv:1504.06397v1 [q-fin.TR])

Technical trading rules have a long history of use by practitioners in financial markets, yet their profitability and efficiency remain controversial. In this paper, we test the performance of more than seven thousand traditional technical trading rules on the Shanghai Securities Composite Index (SSCI) from May 21, 1992 through June 30, 2013 and the Shanghai Shenzhen 300 Index (SHSZ 300) from April 8, 2005 through June 30, 2013 to check whether an effective trading strategy can be found using performance measurements based on return and the Sharpe ratio. To correct for the influence of the data-snooping effect, we adopt the Superior Predictive Ability test to evaluate whether there exists a trading rule that can significantly outperform the benchmark. The result shows that for SSCI, technical trading rules offer significant profitability, while for SHSZ 300 this ability is lost. We further partition the SSCI into two sub-series and find that the efficiency of technical trading in the sub-series, which have exactly the same spanning period as SHSZ 300, is severely weakened. By testing the trading rules on both indexes with a five-year moving window, we find that the financial bubble from 2005 to 2007 greatly improved the effectiveness of technical trading rules. This is consistent with the predictive ability of technical trading rules appearing when the market is less efficient.

by <a href="http://arxiv.org/find/q-fin/1/au:+Wang_S/0/1/0/all/0/1">Shan Wang</a> (ECUST), <a href="http://arxiv.org/find/q-fin/1/au:+Jiang_Z/0/1/0/all/0/1">Zhi-Qiang Jiang</a> (ECUST), <a href="http://arxiv.org/find/q-fin/1/au:+Li_S/0/1/0/all/0/1">Sai-Ping Li</a> (Academia Sinica), <a href="http://arxiv.org/find/q-fin/1/au:+Zhou_W/0/1/0/all/0/1">Wei-Xing Zhou</a> (ECUST) at April 27, 2015 01:30 AM

Viability of Reverse Pricing in Cellular Networks: A New Outlook on Resource Management. (arXiv:1504.06395v1 [cs.NI])

Reverse pricing has been recognized as an effective tool to handle the uncertainty of users' demands in the travel industry (e.g., airlines and hotels). To investigate its viability in cellular networks, we study the practical limitations of (operator-driven) time-dependent pricing that has been recently introduced, taking into account demand uncertainty. Then, we endeavor to design the reverse pricing mechanism to resolve the weakness of the time-dependent pricing scheme. We show that the proposed pricing scheme can achieve "triple-win" solutions: an increase in the total revenue of the operator; higher resource utilization efficiency; and an increment in the total payoff of the users. Our findings provide a new outlook on resource management, and design guidelines for adopting the reverse pricing scheme.

by <a href="http://arxiv.org/find/cs/1/au:+Jung_S/0/1/0/all/0/1">Sang Yeob Jung</a>, <a href="http://arxiv.org/find/cs/1/au:+Kim_S/0/1/0/all/0/1">Seong-Lyun Kim</a> at April 27, 2015 01:30 AM

Throughput Optimal and Fast Near-Optimal Scheduling with Heterogeneously Delayed Network-State Information (Extended Version). (arXiv:1504.06387v1 [cs.NI])

We consider the problem of distributed scheduling in wireless networks where heterogeneously delayed information about queue lengths and channel states of all links are available at all the transmitters. In an earlier work (by Reddy et al. in Queueing Systems, 2012), a throughput optimal scheduling policy (which we refer to henceforth as the R policy) for this setting was proposed. We study the R policy, and examine its two drawbacks -- (i) its huge computational complexity, and (ii) its non-optimal average per-packet queueing delay. We show that the R policy unnecessarily constrains itself to work with information that is more delayed than that afforded by the system. We propose a new policy that fully exploits the commonly available information, thereby greatly improving upon the computational complexity and the delay performance of the R policy. We show that our policy is throughput optimal. Our main contribution in this work is the design of two fast and near-throughput-optimal policies for this setting, whose explicit throughput and runtime performances we characterize analytically. While the R policy takes a few milliseconds to several tens of seconds to compute the schedule once (for varying number of links in the network), the running times of the proposed near-throughput-optimal algorithms range from a few microseconds to only a few hundred microseconds, and are thus suitable for practical implementation in networks with heterogeneously delayed information.

by <a href="http://arxiv.org/find/cs/1/au:+Narasimha_S/0/1/0/all/0/1">Srinath Narasimha</a>, <a href="http://arxiv.org/find/cs/1/au:+Kuri_J/0/1/0/all/0/1">Joy Kuri</a> at April 27, 2015 01:30 AM

On Freeze LTL with Ordered Attributes. (arXiv:1504.06355v1 [cs.LO])

This paper is concerned with Freeze LTL, a temporal logic on data words with registers. In a (multi-attributed) data word each position carries a letter from a finite alphabet and assigns a data value to a fixed, finite set of attributes. The satisfiability problem of Freeze LTL is undecidable if more than one register is available or tuples of data values can be stored and compared arbitrarily. Starting from the decidable one-register fragment we propose an extension that allows for specifying a dependency relation on attributes. This restricts in a flexible way how collections of attribute values can be stored and compared. This new conceptual dimension is orthogonal to the number of registers or the available temporal operators. The extension is strict. Admitting arbitrary dependency relations satisfiability becomes undecidable. Tree-like relations, however, induce a family of decidable fragments escalating the ordinal-indexed hierarchy of fast-growing complexity classes, a recently introduced framework for non-primitive recursive complexities. This results in completeness for the class ${\bf F}_{\epsilon_0}$. We employ nested counter systems and show that they relate to the hierarchy in terms of the nesting depth.

by <a href="http://arxiv.org/find/cs/1/au:+Decker_N/0/1/0/all/0/1">Normann Decker</a>, <a href="http://arxiv.org/find/cs/1/au:+Thoma_D/0/1/0/all/0/1">Daniel Thoma</a> at April 27, 2015 01:30 AM

Strategic Teaching and Learning in Games. (arXiv:1504.06341v1 [cs.GT])

It is known that there are uncoupled learning heuristics leading to Nash equilibrium in all finite games. Why should players use such learning heuristics and where could they come from? We show that there is no uncoupled learning heuristic leading to Nash equilibrium in all finite games that a player has an incentive to adopt, that would be evolutionarily stable, or that could "learn itself". Rather, a player has an incentive to strategically teach such a learning opponent in order to secure at least the Stackelberg leader payoff. The impossibility result remains intact when restricted to the classes of generic games, two-player games, potential games, games with strategic complements or 2x2 games, in which learning is known to be "nice". More generally, it also applies to uncoupled learning heuristics leading to correlated equilibria, rationalizable outcomes, iterated admissible outcomes, or minimal curb sets. A possibility result restricted to "strategically trivial" games fails if some generic games outside this class are considered as well.

by <a href="http://arxiv.org/find/cs/1/au:+Schipper_B/0/1/0/all/0/1">Burkhard C. Schipper</a> at April 27, 2015 01:30 AM

CompsciOverflow

kth nearest vertex in a unweighted graph

Given an unweighted undirected graph $G$ with $10^5$ vertices and a subset $S$ of special vertices and an integer $k$, I want to find the $k$th nearest special vertex for each vertex. What algorithm can I use for this problem?

I have considered algorithms that compute shortest paths from every vertex to all other vertices (like Floyd-Warshall), but our graph is unweighted and we need much better performance.
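One approach that seems to fit (a sketch, with illustrative names): run a single multi-source BFS from all special vertices at once, letting each vertex record at most $k$ labels from distinct sources. Because the graph is unweighted, labels pop from the queue in nondecreasing distance, and a vertex that already holds $k$ labels can safely drop any further ones, so the total work is roughly $O(k(|V|+|E|))$ rather than all-pairs:

from collections import deque

def kth_nearest_special(adj, special, k):
    # adj: adjacency lists; special: the special vertices; unit edge weights
    n = len(adj)
    dists = [[] for _ in range(n)]    # up to k smallest distances per vertex
    seen = [set() for _ in range(n)]  # special sources already recorded here
    q = deque((s, s, 0) for s in special)
    while q:
        v, src, d = q.popleft()       # BFS: labels arrive in nondecreasing d
        if src in seen[v] or len(dists[v]) >= k:
            continue                  # duplicate source, or v already full
        seen[v].add(src)
        dists[v].append(d)
        for u in adj[v]:
            q.append((u, src, d + 1))
    # dists[v][k-1] is the distance to v's kth nearest special vertex
    return [ds[k - 1] if len(ds) >= k else None for ds in dists]

The pruning is safe because a $(k{+}1)$-th label at $v$ arrives no earlier than the $k$ already stored, so it can never be among the $k$ nearest distinct sources of anything reached through $v$.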

by milos at April 27, 2015 01:29 AM

StackOverflow

proprocessors in C++ and their alternatives in Scala

What is the Scala alternative to preprocessor directives like those in C++? Let's say I have something like this:

 #ifdef ADD
 class Add extends Expr {
   Expr left, right;
   Add(Expr l, Expr r) { left = l; right = r; }
 #ifdef EVAL
   double eval() {
     return left.eval() + right.eval();
   }
 #endif
 #ifdef PRINT
   void print() {
     left.print();
     System.out.print("+");
     right.print();
   }
 #endif
 }
 #endif

How can I get an equivalent of this in Scala?
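Scala has no preprocessor; the closest idiomatic substitute for this kind of feature toggling is mixin composition, where each #ifdef block becomes a trait and "defining a flag" becomes "mixing the trait in". A rough sketch (the names and the Double result type are my own assumptions):

trait Expr
trait Eval extends Expr { def eval: Double }
trait Print extends Expr { def print(): Unit }

// The "EVAL and PRINT enabled" variant of Add; other variants would
// mix in only the traits they need.
class Add(val left: Eval with Print, val right: Eval with Print)
    extends Eval with Print {
  def eval: Double = left.eval + right.eval
  def print(): Unit = { left.print(); Predef.print("+"); right.print() }
}

For true conditional compilation (different source compiled per build), the usual route is build-tool support such as sbt source generators or separate source directories, not a language feature.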

by Valerin at April 27, 2015 01:27 AM

/r/emacs

I want to ditch MS Word and fully adopt LaTeX. Any advice?

I hear that Emacs is perhaps the best text editor for LaTeX and I really want to adopt it into my workflow. Is there any advice on how to set it up? I would like something that can load a simple template, similar to TeXShop. (I'm using a Mac; not sure if that's important to point out.)

submitted by guti495

April 27, 2015 01:14 AM

StackOverflow

Fork and Copy on write file systems zfs btrfs

I am trying to figure out if the following scenario is possible, and would like suggestions as to how I can go about it.

I would like to fork process P1, creating process P2. Process P2 should not impact the state of process P1 in any way. The stack/local variables and heap memory are normally copied-on-write by the call to fork(), so that should not be a problem. However, any changes made to the disk by P2 will be visible to P1 as well.

I am looking for a way to create the cloned process P2 so that, instead of running on the original filesystem, it runs on a cloned filesystem. I assume such functionality could be built using ZFS/btrfs or other similar copy-on-write file systems?

  • The end result should be that any write operations done by P2 do not impact process P1. I understand that the stack variables in that process are handled anyway; however, any operations done on the disk by P2 can potentially impact P1, which is what I am trying to avoid.

I wonder if anyone with experience in these systems can give suggestions?
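If a copy-on-write filesystem is on the table, one hedged sketch of the shape this could take with btrfs (the paths, the assumption that /data is a btrfs subvolume, and running with sufficient privileges are all illustrative; the snapshot must be taken before P2 opens any files):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {                       /* child = P2 */
        /* btrfs snapshots are themselves copy-on-write and cheap */
        if (system("btrfs subvolume snapshot /data /data-p2") != 0)
            _exit(1);
        if (chroot("/data-p2") != 0 || chdir("/") != 0)
            _exit(1);
        /* from here on, P2's disk writes land only in the snapshot */
        _exit(0);
    }
    /* parent = P1 keeps using /data, unaffected by P2's writes */
    return 0;
}

ZFS clones (zfs snapshot + zfs clone) would play the same role; in both cases, file descriptors inherited across fork() still point at the original filesystem, so P2 must reopen its files after the chroot.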

by tsar2512 at April 27, 2015 01:12 AM

Planet Theory

Searching for a Compressed Polyline with a Minimum Number of Vertices

Authors: Alexander Gribov
Download: PDF
Abstract: There are many practical applications that require simplification of polylines. Some of the goals are to reduce the amount of information necessary to store, improve processing time, or simplify editing. The simplification is usually done by removing some of the vertices, making the resultant polyline go through a subset of the source polyline vertices. However, such approaches do not necessarily produce a new polyline with the minimum number of vertices. The approximate solution to find a polyline, within a specified tolerance, with the minimum number of vertices is described in this paper.

April 27, 2015 12:48 AM

Recent Advances in Real Geometric Reasoning

Authors: James H. Davenport, Matthew England
Download: PDF
Abstract: In the 1930s Tarski showed that real quantifier elimination was possible, and in 1975 Collins gave a remotely practicable method, albeit with doubly-exponential complexity, which was later shown to be inherent. We discuss some of the recent major advances in Collins' method: such as an alternative approach based on passing via the complexes, and advances which come closer to "solving the question asked" rather than "solving all problems to do with these polynomials".

April 27, 2015 12:48 AM

A Characterization of Visibility Graphs for Pseudo-Polygons

Authors: Matt Gibson, Erik Krohn, Qing Wang
Download: PDF
Abstract: In this paper, we give a characterization of the visibility graphs of pseudo-polygons. We first identify some key combinatorial properties of pseudo-polygons, and we then give a set of five necessary conditions based on the identified properties. We then prove that these necessary conditions are also sufficient via a reduction to a characterization of vertex-edge visibility graphs given by O'Rourke and Streinu.

April 27, 2015 12:44 AM

Approximate Fitting of Circular Arcs when Two Points are Known

Authors: Alexander Gribov
Download: PDF
Abstract: The task of approximating points with circular arcs is performed in many applications, such as polyline compression, noise filtering, and feature recognition. However, the development of algorithms that perform a significant amount of circular-arc fitting requires an efficient way of fitting circular arcs with complexity O(1). An elegant solution to this task based on an eigenvector problem for a square nonsymmetrical matrix is described in [1]. For the compression algorithm described in [2], it is necessary to solve this task when two points on the arc are known. This paper describes a different approach to efficiently fitting the arcs and solves the task when one or two points are known.

April 27, 2015 12:44 AM

Straight-line Drawability of a Planar Graph Plus an Edge

Authors: P. Eades, S. H. Hong, G. Liotta, N. Katoh, S. H. Poon
Download: PDF
Abstract: We investigate straight-line drawings of topological graphs that consist of a planar graph plus one edge, also called almost-planar graphs. We present a characterization of such graphs that admit a straight-line drawing. The characterization enables a linear-time testing algorithm to determine whether an almost-planar graph admits a straight-line drawing, and a linear-time drawing algorithm that constructs such a drawing, if it exists. We also show that some almost-planar graphs require exponential area for a straight-line drawing.

April 27, 2015 12:44 AM

On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

Authors: Frank Neumann, Carsten Witt
Download: PDF
Abstract: Evolutionary algorithms have been frequently used for dynamic optimization problems. With this paper, we contribute to the theoretical understanding of this research area. We present the first computational complexity analysis of evolutionary algorithms for a dynamic variant of a classical combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very effective in dynamically tracking changes made to the problem instance.

April 27, 2015 12:43 AM

Run Generation Revisited: What Goes Up May or May Not Come Down

Authors: Michael A. Bender, Samuel McCauley, Andrew McGregor, Shikha Singh, Hoa T. Vu (Stony Brook University; University of Massachusetts, Amherst)
Download: PDF
Abstract: In this paper, we revisit the classic problem of run generation. Run generation is the first phase of external-memory sorting, where the objective is to scan through the data, reorder elements using a small buffer of size M, and output runs (contiguously sorted chunks of elements) that are as long as possible.

We develop algorithms for minimizing the total number of runs (or equivalently, maximizing the average run length) when the runs are allowed to be sorted or reverse sorted. We study the problem in the online setting, both with and without resource augmentation, and in the offline setting.

(1) We analyze alternating-up-down replacement selection (runs alternate between sorted and reverse sorted), which was studied by Knuth as far back as 1963. We show that this simple policy is asymptotically optimal. Specifically, we show that alternating-up-down replacement selection is 2-competitive and no deterministic online algorithm can perform better.

(2) We give online algorithms having smaller competitive ratios with resource augmentation. Specifically, we exhibit a deterministic algorithm that, when given a buffer of size 4M, is able to match or beat any optimal algorithm having a buffer of size M. Furthermore, we present a randomized online algorithm which is 7/4-competitive when given a buffer twice that of the optimal.

(3) We demonstrate that performance can also be improved with a small amount of foresight. We give an algorithm, which is 3/2-competitive, with foreknowledge of the next 3M elements of the input stream. For the extreme case where all future elements are known, we design a PTAS for computing the optimal strategy a run generation algorithm must follow.

(4) Finally, we present algorithms tailored for nearly sorted inputs which are guaranteed to have optimal solutions with sufficiently long runs.

April 27, 2015 12:43 AM

Approximation Algorithms for Inventory Problems with Submodular or Routing Costs

Authors: Viswanath Nagarajan, Cong Shi
Download: PDF
Abstract: We consider the following two deterministic inventory optimization problems over a finite planning horizon $T$ with non-stationary demands.

(a) Submodular Joint Replenishment Problem: This involves multiple item types and a single retailer who faces demands. In each time step, any subset of item-types can be ordered, incurring a joint ordering cost which is submodular. Moreover, items can be held in inventory while incurring a holding cost. The objective is to find a sequence of orders that satisfies all demands and minimizes the total ordering and holding costs.

(b) Inventory Routing Problem: This involves a single depot that stocks items, and multiple retailer locations facing demands. In each time step, any subset of locations can be visited using a vehicle originating from the depot. There is also cost incurred for holding items at any retailer. The objective here is to satisfy all demands while minimizing the sum of routing and holding costs.

We present a unified approach that yields $\mathcal{O}\left(\frac{\log T}{\log\log T}\right)$-factor approximation algorithms for both problems when the holding costs are polynomial functions. A special case is the classic linear holding cost model, wherein this is the first sub-logarithmic approximation ratio for either problem.

April 27, 2015 12:43 AM

A Simple and Fast Linear-Time Algorithm for Proportional Apportionment

Authors: Sebastian Wild, Raphael Reitzig
Download: PDF
Abstract: Cheng and Eppstein (ISAAC, 2014) describe a linear-time algorithm for computing highest-average allocations in proportional apportionment scenarios, for instance assigning seats to parties in parliament so that the distribution of seats resembles the vote tally as well as possible.

We propose another linear-time algorithm that consists of only one call to a rank selection algorithm after elementary preprocessing. Our algorithm is conceptually simpler and faster in practice than the one by Cheng and Eppstein.

April 27, 2015 12:42 AM

Sampling Correctors

Authors: Clément Canonne, Themis Gouleakis, Ronitt Rubinfeld
Download: PDF
Abstract: In many situations, sample data is obtained from a noisy or imperfect source. In order to address such corruptions, this paper introduces the concept of a sampling corrector. Such algorithms use structure that the distribution is purported to have, in order to allow one to make "on-the-fly" corrections to samples drawn from probability distributions. These algorithms then act as filters between the noisy data and the end user.

We show connections between sampling correctors, distribution learning algorithms, and distribution property testing algorithms. We show that these connections can be utilized to expand the applicability of known distribution learning and property testing algorithms as well as to achieve improved algorithms for those tasks.

As a first step, we show how to design sampling correctors using proper learning algorithms. We then focus on the question of whether algorithms for sampling correctors can be more efficient in terms of sample complexity than learning algorithms for the analogous families of distributions. When correcting monotonicity, we show that this is indeed the case when also granted query access to the cumulative distribution function. We also obtain sampling correctors for monotonicity without this stronger type of access, provided that the distribution be originally very close to monotone (namely, at a distance $O(1/\log^2 n)$). In addition to that, we consider a restricted error model that aims at capturing "missing data" corruptions. In this model, we show that distributions that are close to monotone have sampling correctors that are significantly more efficient than achievable by the learning approach.

We also consider the question of whether an additional source of independent random bits is required by sampling correctors to implement the correction process.

April 27, 2015 12:42 AM

The Minimum Wiener Connector

Authors: Natali Ruchansky, Francesco Bonchi, David Garcia-Soriano, Francesco Gullo, Nicolas Kourtellis
Download: PDF
Abstract: The Wiener index of a graph is the sum of all pairwise shortest-path distances between its vertices. In this paper we study the novel problem of finding a minimum Wiener connector: given a connected graph $G=(V,E)$ and a set $Q\subseteq V$ of query vertices, find a subgraph of $G$ that connects all query vertices and has minimum Wiener index.

We show that The Minimum Wiener Connector admits a polynomial-time (albeit impractical) exact algorithm for the special case where the number of query vertices is bounded. We show that in general the problem is NP-hard, and has no PTAS unless $\mathbf{P} = \mathbf{NP}$. Our main contribution is a constant-factor approximation algorithm running in time $\widetilde{O}(|Q||E|)$.

A thorough experimentation on a large variety of real-world graphs confirms that our method returns smaller and denser solutions than other methods, and does so by adding to the query set $Q$ a small number of important vertices (i.e., vertices with high centrality).

April 27, 2015 12:41 AM

Overview of Swallow --- A Scalable 480-core System for Investigating the Performance and Energy Efficiency of Many-core Applications and Operating Systems

Authors: Simon J. Hollis, Steve Kerrison
Download: PDF
Abstract: We present Swallow, a scalable many-core architecture, with a current configuration of 480 x 32-bit processors.

Swallow is an open-source architecture, designed from the ground up to deliver scalable increases in usable computational power to allow experimentation with many-core applications and the operating systems that support them.

Scalability is enabled by the creation of a tile-able system with a low-latency interconnect, featuring an attractive communication-to-computation ratio and the use of a distributed memory configuration.

We analyse the energy, computational, and communication performance of Swallow. The system provides 240GIPS with each core consuming 71--193mW, dependent on workload. Power consumption per instruction is lower than almost all systems of comparable scale.

We also show how the use of a distributed operating system (nOS) allows the easy creation of scalable software to exploit Swallow's potential. Finally, we show two use case studies: modelling neurons and the overlay of shared memory on a distributed memory system.

April 27, 2015 12:41 AM

The Range of Topological Effects on Communication

Authors: Arkadev Chattopadhyay, Atri Rudra
Download: PDF
Abstract: We continue the study of communication cost of computing functions when inputs are distributed among $k$ processors, each of which is located at one vertex of a network/graph called a terminal. Every other node of the network also has a processor, with no input. The communication is point-to-point and the cost is the total number of bits exchanged by the protocol, in the worst case, on all edges.

Chattopadhyay, Radhakrishnan and Rudra (FOCS'14) recently initiated a study of the effect of topology of the network on the total communication cost using tools from $L_1$ embeddings. Their techniques provided tight bounds for simple functions like Element-Distinctness (ED), which depend on the 1-median of the graph. This work addresses two other kinds of natural functions. We show that for a large class of natural functions like Set-Disjointness the communication cost is essentially $n$ times the cost of the optimal Steiner tree connecting the terminals. Further, we show that for natural composed functions like $\text{ED} \circ \text{XOR}$ and $\text{XOR} \circ \text{ED}$, the naive protocols suggested by their definition are optimal for general networks. Interestingly, the bounds for these functions depend on more involved topological parameters that are a combination of Steiner tree and 1-median costs.

To obtain our results, we use some new tools in addition to ones used in Chattopadhyay et al. These include (i) viewing the communication constraints via a linear program; (ii) using tools from the theory of tree embeddings to prove topology sensitive direct sum results that handle the case of composed functions and (iii) representing the communication constraints of certain problems as a family of collection of multiway cuts, where each multiway cut simulates the hardness of computing the function on the star topology.

April 27, 2015 12:41 AM

Modal Inclusion Logic: Being Lax is Simpler than Being Strict

Authors: Lauri Hella, Antti Kuusisto, Arne Meier, Heribert Vollmer
Download: PDF
Abstract: We investigate the computational complexity of the satisfiability problem of modal inclusion logic. We distinguish two variants of the problem: one for strict and another one for lax semantics. The complexity of the lax version turns out to be complete for EXPTIME, whereas with strict semantics, the problem becomes NEXPTIME-complete.

April 27, 2015 12:40 AM

CompsciOverflow

What additional expressivity does polyvariance give in pushdown CFA?

I'm reading through Pushdown Control-Flow Analysis of Higher-Order Programs, which presents a synthesis of the Abstracting Abstract Machines technique and pushdown automata to get static analysis which perfectly matches call and return sites. The paper presents a monovariant and two polyvariant forms of the system.

I can't seem to get my head around what additional expressive power polyvariance (e.g. 1CFA) grants when return-flow merging is already eliminated in the monovariant case by the pushdown methods.

Could someone provide an example program in which polyvariance helps?
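One classic family of examples involves values that flow through a closure's environment rather than along return edges. Pushdown analysis matches calls with returns, but with a monovariant heap/store there is still a single abstract binding for n below, shared by both closures (sketched here in Python-style notation):

def make_adder(n):            # the binding of n is what polyvariance splits
    return lambda m: m + n    # n is read from the closure's environment

add1 = make_adder(1)          # call site A
add2 = make_adder(2)          # call site B
result = add1(10)             # concretely always 11

A monovariant pushdown analysis concludes add1(10) may be 11 or 12, because the one abstract n holds {1, 2}; perfect call/return matching does not help, since the imprecision travels inside the stored closure, not along a return edge. A 1-CFA-style polyvariant analysis allocates a separate binding of n per call site, separating the two closures and recovering the precise result.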

by Alex R at April 27, 2015 12:37 AM

StackOverflow

Generating a link to a controller action in Play Framework 2.3

I'm working on a Play application and need to generate links in a mixed Scala-HTML view that call controller actions. I found this question from a couple years ago that's similar to my situation, but the provided answers don't work for me.

The elements are generated in a loop so I can't manually insert the argument to the controller action, but nothing I've tried has worked. This is the line I have now:

ID: @{var fhirID = <processing for ID>; <a href='@routes.Users.fhirUserDetails(fhirID)'>fhirID</a>}

The accepted answer to the question I linked earlier effectively uses this structure too:

<a href='@routes.Application.show("some")'>My link with some string</a>

My issue here is twofold:

1) How can I pass the variable fhirID to the controller action? My generated link simply contains the literal text "fhirID" instead of the value computed by the first part of the statement.

2) Is the @routes.Users syntax correct? When I click the generated link, it literally attempts to render a page at /myapp/@routes.Users.fhirUserDetails(fhirID)

I realize I'm probably missing something very basic here- thanks for any advice!
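In case it helps, here is a sketch of the usual Twirl idiom. Inside a @{...} block you are in plain Scala, so the embedded <a> element becomes a Scala XML literal: the @routes reference in the attribute stays literal text and the bare fhirID renders as the word itself, which matches both symptoms. @defining names the computed value while staying in template syntax (computeFhirID and user are stand-ins for your processing logic):

ID: @defining(computeFhirID(user)) { fhirID =>
  <a href="@routes.Users.fhirUserDetails(fhirID)">@fhirID</a>
}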

by dkhaupt at April 27, 2015 12:35 AM

TheoryOverflow

Topological sort with alternative choices of predecessors

I have a family of directed graphs over the same set of nodes $V$ defined as follows.

Each node $v \in V$ has $k_v$ alternative choices for its set of predecessors. In other words, I am given a relation $\rho\colon V \to 2^{V}$ such that $|\rho(v)| = k_v$ (where $\rho(v) = \{S \subset V \mid (v, S) \in \rho\}$, I'm abusing the relation notation a little bit). This relation induces a family of graphs: if we pick one predecessor set $S_v \in \rho(v)$ for each $v$, we obtain a fixed set of edges $E = \{ (u, v) \mid u \in S_v \}$ for the entire graph.

There are $\prod\limits_{v \in V} k_v$ such possible graphs, some of them are acyclic, some are not. I want to find at least one acyclic graph among these possibilities, and sort it topologically. Can it be done in polynomial time (i.e., faster than enumerating all possible combinations of choices explicitly)?

(As an illustrative application, imagine a package manager where some packages depend on either one of these other packages, because they all provide similar capabilities.)
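For what it's worth, a sketch of a greedy fixpoint approach that appears to give a polynomial-time answer. Call $v$ ready when some $S \in \rho(v)$ lies entirely inside the set of already-placed vertices, and keep placing ready vertices. The key observation is monotonicity: placing a vertex never destroys another vertex's readiness, so if any acyclic selection exists, its topological order shows the greedy can always make progress; conversely, the produced order together with each vertex's witnessing predecessor set is a valid topological sort. Naively:

def choose_acyclic_order(rho):
    # rho: dict mapping each vertex to a list of candidate predecessor sets
    placed, order = set(), []
    progress = True
    while progress and len(order) < len(rho):
        progress = False
        for v, choices in rho.items():
            if v not in placed and any(set(S) <= placed for S in choices):
                placed.add(v)
                order.append(v)
                progress = True
    # None signals that no selection of predecessor sets yields a DAG
    return order if len(order) == len(rho) else None

With a counter per $(v, S)$ pair tracking how many members of $S$ are still unplaced, this becomes Kahn's algorithm generalized to "OR" dependencies and runs in time linear in the total size of $\rho$.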

by Skiminok at April 27, 2015 12:15 AM

QuantOverflow

What is a Basis Swap Curve?

I know what a Swap Curve is. But I don't understand what a Basis Swap Curve is and how it is constructed?

Need some guidance on this.

by lakesh at April 27, 2015 12:07 AM

CompsciOverflow

T(n/3) + log(n)

How do you find the Theta of this recurrence? $$T(n) = T(\frac{n}{3}) + \log_2(n)$$ I end up getting a pattern of $$T(\frac{n}{3^{k}}) + \log_2(\frac{n}{3^{k-1}}) + \log_2(\frac{n}{3^{k-2}}) + ... + \log_2(n)$$ When I solve for $k$ with $T(1) = 1$ I get this... $$\frac{n}{3^{k}} = 1 \\ n = 3^{k} \\ \ln n = k \ln 3 \\ k = \log_3(n)$$ Then I plug back into the expansion and get this... $$\log_2(\frac{n}{3^{\log_3 n}}) +\log_2(\frac{n}{3^{\log_3 n - 1}}) + ... + \log_2(n)$$ I'm not sure how to proceed from this point. Any suggestions?
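One way to finish (a sketch): the expansion has about $\log_3 n$ terms of the form $\log_2 n - i\log_2 3$, so

$$\sum_{i=0}^{\log_3 n} \log_2\Big(\frac{n}{3^i}\Big) = (\log_3 n + 1)\log_2 n - \log_2 3 \cdot \frac{\log_3 n\,(\log_3 n + 1)}{2}$$

Both terms are of order $\log^2 n$ and the second is half the first (up to lower-order terms), so they do not cancel and $T(n) = \Theta(\log^2 n)$.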

by MD_90 at April 27, 2015 12:04 AM

/r/compsci

Does anyone know of any good charting applications that do not require programming?

I'd like to find a good app that will create nice-looking charts (preferably with animations). However, the person using it does not have any programming experience. Any suggestions?

Thanks.

Edit: http://www.quora.com/Are-there-any-good-charting-visualization-tools-that-dont-involve-programming

submitted by garyOakCantIgnore

April 27, 2015 12:02 AM

Planet Clojure

Polymorphic performance

There was a question recently on the Clojure mailing list about when to use multimethods vs protocols vs case vs cond:

There are situations where I want to dispatch functions based on certain of their properties. I could use case statements instead, but that looks more coupled, and more change is required if I want to add new types.

What I want to ask is whether I need to avoid using multimethods for performance reasons. I read somewhere that they are not really fast, but the posts were old and the performance might have improved in the meantime. Should I favor case and cond branches instead of defmulti when I need performance?

I responded there with some guidance but wanted to flesh that out a bit more and add some numbers. There are a couple of axes of interest here:

  • Open vs closed - are you ok with code that specifies a concrete set of choices and requires modification to add new cases? Or do you want an open system that can be extended without modifying the existing code. Multimethods and protocols are open, case and cond are closed.
  • Type vs value - do you want to dispatch based on type of a single argument or based on values or types of multiple arguments? Are the values you are choosing between constants or expressions that require evaluation?

General guidelines:

  • open extension and type-based dispatch => protocols
  • open extension and value-based dispatch => multimethods
  • closed constant choices => case
  • closed expressions => cond

I created a repo with some perf tests in it for comparison purposes - run them with lein run if you like. The numbers recorded here were created on a Macbook Pro with Java 1.8 and either Clojure 1.6 or 1.7.0-beta2 as specified.

Value-based dispatch performance

If we look first at value-based dispatch, I compared case vs cond vs multimethods. The implementation of case is actually more complicated than you might expect, going to great lengths to leverage a table-based constant-time lookup. This constant-time lookup is only available if the values to compare are compile-time constants, so you cannot use arbitrary expressions. The example being tested:

(defn value-case [n]
  (case n
    1 "1"
    2 "2"
    3 "3"
    4 "4"
    5 "5"))

This is being compared with cond, which evaluates a series of conditions until a match is found (on average, linear time not constant time):

(defn value-cond [n]
  (cond
    (= n 1) "1"
    (= n 2) "2"
    (= n 3) "3"
    (= n 4) "4"
    (= n 5) "5"))

And we can do something similar with multimethods:

(defmulti value-multi identity)
(defmethod value-multi 1 [n] "1")
(defmethod value-multi 2 [n] "2")
(defmethod value-multi 3 [n] "3")
(defmethod value-multi 4 [n] "4")
(defmethod value-multi 5 [n] "5")

Multimethods first evaluate the dispatch function (here, identity), then perform a linear search for a value match, and finally invoke the actual implementation method. The linear search is optimized with a cache from dispatch value to method.

Here is a comparison of performance when called with 1 and 5 in these cases, all times in ns:

Expression        1.6.0  1.7.0-beta2
(value-case 1)       21           17
(value-case 5)       26           18
(value-cond 1)        4            7
(value-cond 5)       90           74
(value-multi 1)      41           44
(value-multi 5)      47           40

Because both case and multimethods use a cache, we see approximately the same time for both the 1st and 5th case. Multimethods require the invocation of two functions and take about twice as long. cond, however, is a linear search through the conditions - if the first case happens to be the one that's hit, this is the fastest option! But if multiple expressions need to be checked, the worst case here is twice as slow as multimethods.

As is commonly the case, if you have prior knowledge about your data, you may be able to leverage that information to optimize your code. If one value tends to dominate in your use case, a quick cond (or if) check that catches it before hitting a broader set of use cases could be a big win. Combining closed (for better performance of known cases) and open (for extensibility) approaches is a useful technique.

There are of course lots more variations that could be tested here - depending on the complexity of the dispatch function, condition expressions, weightings of different use cases, etc any of these options (or a combination) might be the best for you. To know for sure, measure your own use case! But hopefully this helps to build a mental model of the options.

I did not expect these to vary much between 1.6 and 1.7.0-beta2 but several changes seem to have combined to improve the performance in most of these cases too, so I included that in case anyone was curious.

Type-based dispatch performance

Turning to type-based dispatch, it makes the most sense to consider protocols vs multimethods, which have differing capabilities but overlap in the most common scenario of dispatching based on the type of the first argument to the function. Protocols maximally leverage the type-based dispatch built into the JVM.

Here I benchmarked protocols:

(defprotocol TypeProto
  (type-proto [_]))

(extend-protocol TypeProto
  String
  (type-proto [_] "string")
  Long
  (type-proto [_] "long")
  Object
  (type-proto [_] "default"))

versus the same effective call for multimethods:

(defmulti type-multi class)
(defmethod type-multi String [x] "string")
(defmethod type-multi Long [x] "long")
(defmethod type-multi :default [x] "default")

I threw the default case in there to demonstrate how dramatic the improvement is now that this case is being cached in 1.7 (see this previous post for details).

Here’s the comparison of a single call, the default case, and invoking with alternating types (all times in ns):

Expression                              1.6.0  1.7.0-beta2
(type-multi "abc")                         41           40
(type-multi 1/2)                        12051           40
(do (type-multi "abc") (type-multi 5))     85           79
(type-proto "abc")                          7            7
(type-proto 1/2)                            8            9
(do (type-proto "abc") (type-proto 5))     30           25

A few things to notice there. Obviously, caching the default case makes a dramatic performance difference. Second, protocols are about 5x faster than multimethod calls for this particular case of type-based dispatch. You can find older estimates of 100x or more for this difference but I think the 5x difference is a good rule of thumb for modern JVM and Clojure versions.

Generally, this difference in performance between protocols and multimethods is unlikely to be the biggest factor in the performance of your application, but I think protocols are the better default choice for this case.

Finally, it’s also interesting to look at how things change as we go from a monomorphic call (single type) to a bimorphic call (alternating types). Since the alternating "abc"/5 case is doing twice as many calls, we expect it to be about twice as slow, and indeed the multimethod one is right there. Interestingly, though, the protocol case gets slower by 3-4x - we are hurting the ability of the JIT to optimize this case and killing some of the performance gains that protocols are leveraging. It’s still significantly faster than multimethods though. If we pushed this further to a megamorphic (>2 cases) call, you would see a further loss of optimization.

For a more detailed discussion on using multimethods and protocols for polymorphism, check out the discussion in chapter 1 of my book with Ben Vandgrift, Clojure Applied, now in beta.

by Inside Clojure at April 27, 2015 12:00 AM

Planet Emacsen

Endless Parentheses: Debug your Emacs init file with the Bug-Hunter

“With great power comes great responsibility,” and Emacs is a prime example of that. The versatility of having an editor that’s a lisp interpreter is truly empowering, but it can also backfire on you in the most unexpected ways. If you’ve ever run into a foggy incompatibility issue between two unrelated packages, one that manifested itself by turning on your mother’s coffee machine every other weekday, then you know how difficult this can be to track down.

One recurring theme on Emacs.StackExchange is that users will come to us with some random arcane issue, and either provide no more information or dump their init file on the question. The best answer we can give in these cases is for them to bisect their init file. But if that’s always the case, why not automate it?

The Bug Hunter is an Emacs library that does that for you.

Hunting real errors

If there’s an error being thrown during initialization, all it takes is a single command.

M-x bug-hunter-init-file RET RET

The Bug-Hunter will do a bisection search in your init file for the source of the error. Thanks to the magic powers of bisection, it is surprisingly fast even on huge init files.

Hunting unexpected behaviour

If no actual error is being thrown, but some behaviour is still clearly wrong, then it’s a little more tricky: you need to come up with an assertion. That is, you need a snippet of Emacs-Lisp code that will return t if something is wrong and nil if all is fine.

For instance, I wanted to figure out why the cl library was being loaded even though I didn’t explicitly require it anywhere. In this case, the snippet (featurep 'cl) gives me what I need. It returns nil before the library is loaded, and returns t afterwards.

M-x bug-hunter-init-file RET (featurep 'cl) RET

[Screenshot: cl-example.png]

Interactive debugging

Unfortunately this is not supported yet. Communicating with a background Emacs process that is not in batch mode is complicated.

Usually, though, you shouldn’t need it. There’s almost always a snippet that will work for your needs. If your problem is not triggering an error, and you don’t know enough Elisp to write an assertion, let me know about your problem and maybe I can write one for you.

Even better, shoot us a question over at Emacs.StackExchange. We’re always glad to help.


by Artur Malabarba at April 27, 2015 12:00 AM

HN Daily

Planet Clojure

Stack Overflow

Let's play a word association game. When I say "stack overflow" what comes to mind? Is it Spolsky and Atwood's popular question and answer site? Do you imagine a toppling tower of buttermilk pancakes oozing with real Vermont maple syrup? Or do you think about the common catastrophic programming bug? Stack overflows are a real problem in a lot of code. Let's dig in a little bit to understand how and why they happen.

First we need to understand what the stack is and how it is used. A stack is a last-in-first-out data structure. You can 'push' something on top of it and 'pop' off the thing on top of the stack. Want to access something in the middle? Forget about it. You must keep popping off the top until you get there. Think of it like a stack of those delicious pancakes. If you want to add another pancake, the only place you can put it is on the top of the stack. Trying to pull a pancake from the middle of the stack would likely leave you with syrup-soaked pants, so when you take a pancake you must take it from the top.

Almost every programming language uses a stack data structure to keep track of program execution. It's the foundation for how function calls work. As the CPU (or virtual CPU) executes your function, it builds up a local context. This context contains the local variables in the function and the pointer to the line of code currently being executed. What happens, then, when the CPU hits a function call? It needs to jump to a new part of the code and start executing in a new context, but the context of the current function still needs to be there when the new function returns. What happens in that other function should not affect the local variables in the current one.

Enter the program stack. Each thread of execution has a stack. Before jumping to the new function, the compiler inserts instructions to push all of the current context onto this stack. The local variables and everything else that gives that function its context get stacked on top. The last thing pushed is the pointer that tells the CPU where to jump back to after the function call is finished. The parameters that are being passed to the function go on the stack, as does the return value from the function.

As functions call more functions, the stack piles taller and taller. You've seen this stack before and probably didn't know it. When you see an exception that dumps out a big stack trace, you're just seeing a view of all of the function calls piled up on that stack. If you are in a debugger and you look at the call trace, what you are seeing is the debugger's view of the stack.

So if that's a program stack, how and why does the stack overflow? In most programs the stack grows and contracts as the execution of the program goes deeply into nested function calls, and then unwinds those calls. This is not problematic for most programs.

What happens, though, if recursion is introduced into the program? A function calls itself, which may in turn call itself again. This is not problematic so long as the recursion has some reasonable bound - a condition that causes the recursion to stop, then unwind. Even if the bound is quite high, modern computers have such large memory capacities that enough stack space can be allocated to accommodate the program.

Where programs run into trouble is when the recursion is not bounded. Consider this snippet of Swift code from a game that takes user input.

    func move(board: Board) {
        printer.printBoard(board)
        printer.print("Please choose a space on the board: ")
        let spot = reader.getInt()
        if spot != nil && spot! >= 0 && spot! < BoardConstants.SIZE && contains(board.getOpenSpots(), spot!) {
            board.dropToken(spot!, token: token)
        } else {
            move(board)
        }
    }

If the player does not enter a valid 'spot', this function calls itself to get another move. Each improper input puts another layer of context on top of the stack. Pancake after pancake piles high as the player inputs more and more wrong entries. Because this recursion is not bounded, we've given the player the opportunity to overflow our stack. The stack overflows when it takes up more space than has been allocated to it. In the case of the Swift code above, it took about 20,000 wrong inputs, but eventually the program crashed with a stack overflow error. The size of the stack exceeded the allocated stack space and 'overflowed' into adjacent memory.

Here is another example. This is from a Clojure web app. This function calculates a sequence of days between two dates.

(defn collect-days [current end days]
  (cons current
    (if (= current end)
      days
      (collect-days (time/plus current (time/days 1)) end days))))

This function may look safe and bounded, but consider what happens if end is actually before current! The function will keep recursing deeper and deeper. No matter how many days you add to current, it will never be equal to end. This unbounded recursion will eventually overflow the stack.

The moral of the story? Don't allow user input to cause unbounded recursion. Use a loop instead. Loops stay in the context of the function with internal jumping and don't build up on the stack.
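For illustration, here is a hedged sketch of that loop shape (in Scala rather than Swift; the board and reader from the original are replaced with stand-ins):

import scala.io.StdIn
import scala.util.Try

// Retrying inside one stack frame: each bad input just repeats the loop
// body, so nothing piles up on the stack.
def readSpot(boardSize: Int): Int = {
  var spot = -1
  while (spot < 0 || spot >= boardSize) {
    print("Please choose a space on the board: ")
    spot = Try(StdIn.readLine().trim.toInt).getOrElse(-1)
  }
  spot
}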

Having seen this mistake many times, I have also encountered some exceptions to this rule. The Rust language has a really interesting implementation of the stack. Rust actually dynamically allocates new chunks of stack when you fill up what's been allocated. Built in to the language is the ability to jump to a new section of stack and keep going. A Rust program like this won't crash. What a user can do is cause the program to use insane amounts of memory. Eventually, your system will run out of physical memory and begin swapping to virtual memory on the disk, which slows the system to an eventually unusable state.

The other exception is a language like Clojure, which has tail recursion optimization. Instead of calling the function by name, you can use the 'recur' keyword. The recur call must be in the 'tail' position, meaning it is the last thing executed in the function. This allows the compiler to turn that recursion into something more like a loop under the hood and prevent the stack from growing. Tail-optimized recursion is an acceptable alternative to a loop. If you are using the language this way, you'd better make darn sure that you've put the recursion in the tail position. Clojure enforces this, but not all languages do.
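Scala, for example, enforces it on request: annotating a function with @tailrec makes the compiler reject any recursive call that is not in tail position. A minimal, hedged sketch of the collect-days idea in that style (hypothetical names, with the end-before-current case guarded):

import scala.annotation.tailrec

// @tailrec fails compilation unless the recursive call is the last thing
// the function does, so the compiler can turn it into a loop.
@tailrec
def collectDays(current: Int, end: Int, acc: List[Int] = Nil): List[Int] =
  if (current >= end) (end :: acc).reverse // >= also stops when end < current
  else collectDays(current + 1, end, current :: acc)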

Now that we've got a handle on how to control stack growth by avoiding unbounded recursion, I'm really hungry for those pancakes. Who's with me?

Photo Credit: Tavallai

by Doug Bradbury at April 27, 2015 12:00 AM

April 26, 2015

TheoryOverflow

The ouput of Kruskal's algorithm is a minimum spanning tree [on hold]

I want to show that the output of Kruskal's algorithm is a spanning tree.

We suppose that S is the output of Kruskal's algorithm. To show that S is a spanning tree we have to show that S is connected and acyclic.

How can we justify that S is connected?

S is acyclic because, each time an edge is about to be added, the algorithm checks whether it would form a cycle. Is this justification that S is acyclic correct?

Moreover, I want to show that the output spanning tree S of Kruskal's algorithm is a minimum spanning tree, so it is of minimum weight, by contradiction.

We suppose that S is not a minimum spanning tree.

Let T be a spanning tree which has the minimum weight.

How can I go on to get a contradiction ?

by user159870 at April 26, 2015 11:56 PM

Lobsters

CompsciOverflow

Master Theorem applied to recurrence relations [duplicate]

This question already has an answer here:

Can anyone explain how to apply the master theorem to the following recurrence? $$T(n) = T(\frac{n}{3}) + \log(n)$$

by MD_90 at April 26, 2015 11:54 PM

StackOverflow

What is the proper way to insert/update a BLOB with Play 2.3?

I manage a Play server that has a MySQL database. One of its tables has a BLOB column. In play 2.2, before any of the explicit ParameterValue business introduced by 2.3, I was able to read/write just by injecting an Array[Byte] into my query like so:

val foo: Array[Byte] = ???  // Doesn't matter.
SQL("update my_table set the_blob = {foo} where id = {id}").on('foo -> foo, 'id -> id).executeUpdate()

This no longer works. It will complain at compile time with:

type mismatch;                                                                  
  found   : (Symbol, Array[Byte])
  required: anorm.NamedParameter

It seemed Anorm didn't know how to convert an Array[Byte], so in my folly I wrote:

// Now everything will work perfectly and I can get back to my day.
implicit def byteArrayToParameter(ba: Array[Byte]): ParameterValue = {
  ba
}

At first I didn't find any problems, but eventually I noticed that any attempt to write to the table with the BLOB would:

  • Hang the browser.
  • Cause Play's java threads to hog any CPU cores they could find.
  • Never complete the write.

Much debugging brought me back to the implicit function above. Logging messages showed me this conversion was being called over and over in an infinite loop.

Question: How does one handle writing BLOBs properly with Play 2.3?

(or more generally)

Question: How does one provide proper conversion instances for types that can't automatically be converted to a ParameterValue?

Thank you.
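For what it's worth: the loop happens because the implicit def's body itself needs an Array[Byte] => ParameterValue conversion, and the nearest conversion in scope is the def itself, so the compiler re-applies it endlessly. A hedged sketch of the typeclass route instead, assuming Anorm 2.3's ToStatement (newer Anorm releases may already ship such an instance - check before defining your own):

import java.sql.PreparedStatement
import anorm.ToStatement

// Tell Anorm how to bind the bytes directly, instead of going through an
// implicit conversion that ends up invoking itself.
implicit val byteArrayToStatement: ToStatement[Array[Byte]] =
  new ToStatement[Array[Byte]] {
    def set(s: PreparedStatement, index: Int, bytes: Array[Byte]): Unit =
      s.setBytes(index, bytes)
  }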

by fosskers at April 26, 2015 11:46 PM

TheoryOverflow

Data for testing graph algorithms

I am looking for a source of huge data sets to test some graph algorithm implementations. Please also provide some information about the type/distribution (e.g. directed/undirected, simple/not simple, weighted/unweighted) of the graphs in the source, if known.

by Chris at April 26, 2015 11:40 PM

/r/clojure

StackOverflow

How to use swift flatMap to filter out optionals from an array

I'm a little confused around flatMap (added to Swift 1.2)

Say I have an array of some optional type e.g.

let possibles:[Int?] = [nil, 1, 2, 3, nil, nil, 4, 5]

In Swift 1.1 I'd do a filter followed by a map like this:

let filtermap = possibles.filter({ return $0 != nil }).map({ return $0! })
// filtermap = [1, 2, 3, 4, 5]

I've been trying to do this using flatMap a couple ways:

var flatmap1 = possibles.flatMap({
    return $0 == nil ? [] : [$0!]
})

and

var flatmap2:[Int] = possibles.flatMap({
    if let exercise = $0 { return [exercise] }
    return []
})

I prefer the last approach (because I don't have to do a forced unwrap $0!... I'm terrified of these and avoid them at all costs), except that I need to specify the Array type.

Is there an alternative way that figures out the type by context, but doesn't have the forced unwrap?

by MathewS at April 26, 2015 11:05 PM

TheoryOverflow

Good algorithms to solve ATSP

What are some good neighborhood-based local search algorithms or strategies to solve the Asymmetric TSP? I see many 2-opt and k-opt based algorithms (e.g. Lin-Kernighan implementations), but I think that these algorithms are more time consuming since the evaluation complexity is not constant (assuming my calculations are correct).

What are some good alternatives that are less time consuming? Are there any techniques based on swapping nodes, for example? Could you suggest a good paper about such algorithms?

by yafrani at April 26, 2015 10:53 PM

StackOverflow

How to convert if statements in Scala to a case match statement block

I am in the process of learning how to code in Scala by translating from older Java code. The following code is an attempt in this learning process:

I created a class as below. Now, can the if statements in this code be replaced by a case match block? I am specifically looking to do a case match with a fall-through default case that handles an exceptional case. How can I accomplish this?

class MyTest {
//ResEmail is a case class
//MultipartEnityBuilder is from the httpmime-4.4.jar

    def makeResponse(resEmail: ResEmail): HttpEntity = {
            val maker = MultipartEntityBuilder.create()
            maker.addTextBody("api_user", this.apikey)
            maker.addTextBody("api_key", this.apivalue)
            val tos = resEmail.toList
            val tonames = resEmail.toNameList
            val ccs = resEmail.ccS

            if (tos.length == 0) {
              maker.addTextBody(stringToFormattedString(ARG_TO), resEmail.fromEmail, ContentType.create("text/plain", "UTF-8"))
            }

            else if(tos.length > 0) {
              for (i <- 0 until tos.length) 
                maker.addTextBody(PARAM_TO.format(PARAM_TO, i), tos(i), ContentType.create("text/plain", "UTF-8"))
            }

            if (resEmail.fromEmail != null && !resEmail.fromEmail.isEmpty) maker.addTextBody(PARAM_FROM, resEmail.fromEmail, 
              ContentType.create("text/plain", "UTF-8"))

            if (resEmail.fromName != null && !resEmail.fromName.isEmpty) maker.addTextBody(PARAM_FROMNAME, 
              resEmail.fromName, ContentType.create("text/plain", "UTF-8"))

            if (resEmail.replyToEmail != null && !resEmail.replyToEmail.isEmpty) maker.addTextBody(PARAM_REPLYTO, 
              resEmail.replyToEmail, ContentType.create("text/plain", "UTF-8"))

            if (resEmail.subject != null && !resEmail.subject.isEmpty) maker.addTextBody(PARAM_SUBJECT, 
              resEmail.subject, ContentType.create("text/plain", "UTF-8"))

            val tmpString = new MyExpApi().jsonString()
            if (tmpString != "{}") maker.addTextBody(PARAM_MYSMTPAPI, tmpString, ContentType.create("text/plain", 
              "UTF-8"))
            maker.build()
          }


   }// end of class MyTest
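A hedged, self-contained sketch of the shape being asked about - matching on the list with a fall-through default, plus Option for the repeated null/empty guards (the names here are illustrative, not from the code above):

// Match on the recipient list; `_` is the fall-through default case.
def describeRecipients(tos: List[String]): String = tos match {
  case Nil         => "no recipients: fall back to the sender"
  case head :: Nil => s"single recipient: $head"
  case _           => s"${tos.length} recipients"
}

// The null-or-empty checks collapse nicely with Option instead of if:
def nonBlank(s: String): Option[String] = Option(s).filter(_.nonEmpty)
// e.g. nonBlank(resEmail.fromName).foreach(n => maker.addTextBody(PARAM_FROMNAME, n, ...))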

by user3825558 at April 26, 2015 10:51 PM

Lobsters

Planet Clojure

Rarely Reversible

React.js and its “IMGUI” inspired rendering model is exploding in popularity. After my talk at Clojure/West, several folks asked me about two seemingly separate discussion topics: Two-way Bindings and Cursors.

Among those asking me about data binding, there are a few campaigning for a “spreadsheet-like” method of defining user interfaces. On one hand, they’re right to seek a direct-manipulation experience for building artifacts in a fundamentally visual and interactive medium. However, the code-level approach of using spreadsheet-like “equations” is fundamentally flawed.

Meanwhile, folks using Om in practice have expressed their frustrations with cursors. A few suggested using something more principled like Haskell’s lenses. While this may be useful in some situations, it’s fundamentally flawed for the same underlying reasons.

Both designs share a flaw born of a common desire: To automatically map user input back to data sources. When there’s a 1-to-1 mapping from data sources to user interfaces, this is appropriate. However, it’s not sufficient for the general case. In fact, it’s not sufficient for the common case.

Transformations of source data into views beyond trivial editable fields are almost never reversible or equational.

To substantiate my position, let me demonstrate just how difficult it is to write reversible relationships with some examples in the domain of grammars.

Let’s say you have this simple grammar:

A ::= a+

The production A is formed of 1 or more a characters.

An infinite number of substitutions will satisfy the rule.

a -> A
aaaaaa -> A

Here, the direction of the arrow is critically important.

How can you reverse this production rule?

??? <- A

The only universal way to convert directed rules into reversible relations is to never discard “information”.

For example, we could parameterize A by a number.

aaaa <-> A(4)

Alternatively, you can specify a general principle for lossy reversal rules, like “always produce the smallest value that would satisfy the rule”.

a <- A

However, this falls flat on its face if you introduce alternation:

AB ::= a | b

a -> AB
b -> AB

??? <- AB

Neither a nor b is “smaller”, so you need a new principle to reverse by. In a traditional context-free grammar, | doesn’t imply order, but in a PEG, for example, you have ordered choice. You could say that you reverse to the first choice which satisfies the rule. Let’s use / as the ordered choice operator.

AB ::= a / b

a -> AB
b -> AB

a <- AB

Even in this simplified context, we haven’t even begun to scratch the surface of potential problems with reversibility. Constructs such as binders or loops will cause the reversal principles to explode in complexity.

Grammars offer pretty simple and well defined operations. However, practically every new operation introduces new reversal principles. Once real world business logic gets involved, things get hairy quickly.

In the context of user interfaces, these problems are magnified to be so large that they are practically unrecognizable. However, if you zoom in on individual cases in your own applications, you’ll spot this inherent complexity.

A robust solution should not attempt to build upon a foundation of automatic reversibility.

by Brandon Bloom at April 26, 2015 10:33 PM

/r/freebsd

FreeBSD10 and VirtualBox

Hey guys, I am trying to run 1 to 2 VMs at a time on my Intel i5 quad-core on FreeBSD. I am noticing on my i3status bar that I hit a 4.00+ CPU load and 50°C for my CPU temp. I was also installing scapy at the time in another window, and I had my browser open with several pages. What I want to know is: how do I know when I am doing too much? What is the optimum load I should put on my quad-core, 8 GB RAM machine? What would I research to find this out? Thanks for your time :)

submitted by Mr-Free
[link] [5 comments]

April 26, 2015 10:19 PM

DataTau

StackOverflow

How to convert string array to int array in scala

I am very new to Scala, and I am not sure how this is done. I have googled it with no luck. let us assume the code is:

var arr = readLine().split(" ")

Now arr is a string array. Assuming I know that the line I input is a series of numbers e.g. 1 2 3 4, I want to convert arr to an Int (or int) array.

I know that I can convert individual elements with .toInt, but I want to convert the whole array.
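For what it's worth, a minimal sketch of the usual approach - map toInt over the array (hedged: .toInt throws a NumberFormatException if a token is not numeric):

val arr = "1 2 3 4".split(" ") // stands in for readLine().split(" ")
val nums: Array[Int] = arr.map(_.toInt)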

Thank you and apologies if the question is dumb.

by Mohammad Gharehyazie at April 26, 2015 09:38 PM

QuantOverflow

Calculate CVaR for a portfolio

I would like to calculate the Conditional Value at Risk for a portfolio. To be honest, I've been trying for a few days to find an example of the calculation for an entire portfolio, not just for one security, and I'm really having a hard time understanding. All the examples are for a single security. I should add that I'm at the beginning of learning econometrics/statistics.

Let's say that I have a portfolio composed of 3 investments. If I want to calculate CVaR using Monte Carlo prices for the 3 investments, here is what I'm thinking:

1. Create a simulated portfolio of the 3 investments, taking into account the nominal value of every security and its direction (long/short).
2. Run the above portfolio through Monte Carlo n times and generate a P/L distribution by calculating the difference between the start and end NAV for every MC iteration.
3. Say I want to calculate at a 99% confidence level: I then take the mean of the worst 1% of losses from the Monte Carlo distribution.

I would like to emphasize that I don't want to use a normal distribution.

Are the steps above correct for finding the CVaR of a portfolio? Also, does this respect the properties of a coherent risk measure? Thank you.
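For reference, the quantity step 3 estimates is usually written, in hedged notation for a loss $L$ and confidence level $\alpha = 0.99$:

$$\mathrm{CVaR}_{\alpha}(L) = \mathbb{E}\left[L \mid L \ge \mathrm{VaR}_{\alpha}(L)\right]$$

i.e. the expected loss conditional on being in the worst $(1-\alpha)$ tail, which is exactly the mean of the worst 1% of simulated losses.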

by Martin Fuller at April 26, 2015 09:23 PM

StackOverflow

Multi-user-dungeon game item collection count - Racket/Scheme

I am in the process of designing a small MUD game using Racket. In my game, gems collected by the player are used to bribe guards. Currently, if the player possesses more than 1 gem, the game will not let them bribe a guard.

Here is the relevant code:

;; This code processes the bribe command.
((equal? cmd 'bribe)
 (if (get-area-item rid 'guard)
     (begin
       (if (eq? (user-attribute 'gem) 1)
           (begin
             (hash-table-set! areasdb rid (replace-area-item (get-area rid) 'guard '(guard #f)))
             (user-add-value 'gem -1))
           (describe 'bribe-no-gem))
       (describe 'bribe))
     (describe 'bribe-blank)))

Here is my code in its entirety:

#lang racket

;; Oscar Moore - UWL - Functional Programming, Element 2.

;; This section of code must be imported to use any hash tables present.
(require (lib "69.ss" "srfi"))
;; Here is the area-descriptions association list; it contains the descriptions and number of each area within the game.
(define descriptions '( (1 "You are in the Artifact Bank entrance.\n\nPlease enter one of the following available commands:\n")
                        (2 "You enter the Artifact Bank Lobby.\n\nPlease enter one of the following available commands:\n")
                        (3 "You are in the East Lobby wing.")
                        (4 "You walk onto an underground path.\n")
                        (5 "You are in the corridor.\n")
                        (6 "You are in the storage room.")
                        (7 "You crawl into a small air vent.\n")
                        (8 "You are in the staff room.\n")
                        (9 "You are in the bathroom\n")
                        (10 "You are in the South Lobby Wing.\n")
                        (11 "Mountain\n")
                        (12 "East Road.\n")
                        (13 "You are in a house.\n")
                        (14 "North River.\n")
                        (15 "Desert Road.\n")
                        (16 "East Cottage.\n")
                        (17 "Castle Doorway.\n")
                        (18 "Tunnel Doorway.\n")
                        (19 "Tunnel.\n")
                        (20 "Cave.\n")
                        (21 "Tomb Vault Entrance.\n")
                        (22 "Tomb Vault Corridor.\n")
                        (23 "Tomb Vault.\n")
                        (24 "You are now inside the sacred tomb vault, glistening in-front of you is the Merlin Amulet.\nYou pick up the Amulet and escape the ancient artifact bank.\n\nCongratulations, game completed!")))

;; Here are the directions for each of the areas present within the Amulet Hunter game.
(define directions '( (1 (north 5) (south 0) (west 0) (east 2))
                      (2 (north 0) (south 0) (west 1) (east 3))
                      (3 (north 7) (south 0) (west 2) (east 4))
                      (4 (north 0) (south 0) (west 3) (east 0))
                      (5 (north 9) (south 1) (west 0) (east 0))
                      (6 (north 10) (south 0) (west 0) (east 0))
                      (7 (north 11) (south 3) (west 0) (east 0))
                      (8 (north 12) (south 0) (west 0) (east 0))
                      (9 (north 0) (south 5) (west 0) (east 0))
                      (10 (north 14) (south 6) (west 0) (east 0))
                      (11 (north 15) (south 7) (west 0) (east 0))
                      (12 (north 16) (south 8) (west 0) (east 0))
                      (13 (north 17) (south 0) (west 0) (east 0))
                      (14 (north 18) (south 10) (west 0) (east 0))
                      (15 (north 0) (south 11) (west 0) (east 16))
                      (16 (north 20) (south 12) (west 15) (east 0))
                      (17 (north 21) (south 13) (west 0) (east 18))
                      (18 (north 0) (south 14) (west 17) (east 19))
                      (19 (north 0) (south 0) (west 18) (east 20))
                      (20 (north 0) (south 16) (west 19) (east 0))
                      (21 (north 0) (south 17) (west 0) (east 22))
                      (22 (north 0) (south 0) (west 21) (east 23))
                      (23 (north 0) (south 0) (west 22) (east 24))
                      (24 (north 0) (south 0) (west 23) (east 0))))

;; Here are the various properties for each area. As you can see, areas can contain several items which the user can interact with.
;; The availability of items can be changed through the use of #t and #f.
(define areas-list '( (1 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (2 (locked #f) (key #f) (gem #t) (guard #f) (food #f) (end #f))
                      (3 (locked #f) (key #f) (gem #f) (guard #t) (food #f) (end #f))
                      (4 (locked #f) (key #f) (gem #f) (guard #f) (food #t) (end #f))
                      (5 (locked #f) (key #f) (gem #t) (guard #f) (food #f) (end #f))
                      (6 (locked #f) (key #t) (gem #f) (guard #f) (food #f) (end #f))
                      (7 (locked #t) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (8 (locked #f) (key #t) (gem #f) (guard #f) (food #f) (end #f))
                      (9 (locked #f) (key #t) (gem #f) (guard #f) (food #f) (end #f))
                      (10 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (11 (locked #f) (key #f) (gem #f) (guard #t) (food #f) (end #f))
                      (12 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (13 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (14 (locked #f) (key #f) (gem #f) (guard #t) (food #f) (end #f))
                      (15 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (16 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (17 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (18 (locked #f) (key #f) (gem #f) (guard #t) (food #f) (end #f))
                      (19 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (20 (locked #t) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (21 (locked #t) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (22 (locked #f) (key #f) (gem #f) (guard #t) (food #f) (end #f))
                      (23 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #f))
                      (24 (locked #f) (key #f) (gem #f) (guard #f) (food #f) (end #t))))

;; This line of code is what loads the areas-list into the areas hash table.
(define areasdb (make-hash-table))

;; The next two lines of code loop through every item present in the areas-list and input them into the hash table.
(for-each (lambda (x) 
            (hash-table-set! areasdb (car x) (cdr x))) areas-list)

;; Here are the descriptions for the various areas present in this game. 
;;Also seen is the name of the description which allows it to be called.
(define game-texts '((welcome "\n■-------------------------------------------------------------------------■\n\n                    **  Welcome to Amulet Hunter!  **\n\n                 ▪-------------------------------------▪\n\nYou have broken into an ancient artifacts bank. You must safely acquire\nthe mystical Merlin Amulet, which has been locked away for generations.\n\nGood luck, the hunt for the mythical Merlin Amulet awaits you...") 
                     (gem "There is a gem hidden in this area, type 'take' to take it.\n\nPlease enter one of the following available commands:\n")
                     (no-gem "\nYou do not have any gems to bribe the guard with!\n")
                     (locked "\nThis door is currently locked, type 'open' to open it\n")
                     (key "\nThere is a key on the floor in this area, type 'pick' to pick it up\n")
                     (guard "\nWarning! You have been spotted by a guard, type 'bribe' to bribe the \nguard with your gems.\n\nPlease enter one of the following available commands:\n")
                     (food "\nThere is a piece of food left on a table in this area, type 'store' to store it\n")
                     (guard-nogem "\nYou do not have a gem to bribe the guard with!\n")
                     (no-key "\nYou do not have a key to open this door!\n")
                     (lock-open "\nYou put the key into the lock and turn, the door unlocks...\n")
                     (pick "\nYou pick up the key.\n")
                     (pick-error "\nThere is no key in this area to pick up!\n")
                     (take "\nGem taken.")
                     (take-error "\nThere is no gem in this area to take!\n")
                     (bribe "\nYou search yourself for a gem...")
                     (bribe-no-gem "\nYou have no gems to bribe the guard with!")
                     (bribe-no-blank "")
                     (bribe-error "\nThere is no guard present to bribe!\n")
                     (store "\nYou store the food for later.\n")
                     (store-error "\nThere is no food in this area to store!\n")
                     (open "\nYou insert the key and turn, the door opens.\n")
                     (open-error "\nThis door is not currently locked!\n")
                     (heal-no-food "\nYou do not have any food to eat!\n")
                     (heal "\nYou eat some stored food, your health has been replenished.\n")
                     (health-full "\nYour health is already full, you realise you arent hungry.\n")
                     (guard-warning "\nThe guard blocks your escape, maybe you should try a bribe...\n")
                     (guard-attack "\nYou cant escape! \nThe guard strikes you with his baton and you lose 20% of your health!")
                     (user-died "\n                             **  Game Over! **\n\nYou are dead.\n\n■-------------------------------------------------------------------------■\n")
                     (you-won "\n          **  Congratulations, you have acquired the Merlin Amulet!  **\n■-------------------------------------------------------------------------■\n")))

;; This section of code allows the creation of a new hash table for the user attributes.
;; Also visible is the inventory of items which the user can store.
(define user (make-hash-table))
(hash-table-set! user 0 '(1 (key 0)(gem 0) (food 0) (health 100)))

;; This is where the user attributes are updated by the given key and new value.
(define (user-update key value)
  (hash-table-set! user 0 (cons '1 (replace-area-item  (cdr (hash-table-ref user 0)) key value))))

;; Here is where the attribute of the user by the given key is gotten.
(define (user-attribute key)
  (car (assq-ref (cdr (hash-table-ref user 0)) key)))

;; This is where the values of a user attributes are increased.
(define (user-add-value key value)
  (let ((new-value (+ (user-attribute key) value)))
    (user-update key (list key new-value))))

;; This section of code prints out the inventory to the user. Contained within the inventory are the items which the user can store.
;; Added for aesthetic purposes is a border. This improves clarity for the user.
(define (user-status)
  (printf "        ◤---------------------------------------------------------◥        \n                             *  Inventory  *\n")
  (printf "            ")
  (for-each (lambda (x) (printf "~a: ~a       " (car x) (cadr x))) (cdr (hash-table-ref user 0)))
  (printf "\n        ◣---------------------------------------------------------◢        \n\n"))

;; This section of code gets the lists within the association list by its id.
(define (assq-ref assqlist id)
  (cdr (assq id assqlist)))

;; This section of code gets an attribute in the given association list by its id.
(define (lookup data area-id attribute)
  (car (assq-ref (assq-ref data area-id) attribute)))

;; This section of code gets the area within the association list by its id.
(define (get-area-description rid)
  (car (assq-ref descriptions rid)))

;; This section of code returns the appropriate text description of the item in the areasdb.
(define (describe item)
  (printf "~a\n" (car (assq-ref game-texts item))))

;; This section of code is what gets the available commands.
(define (get-available-directions rid)
  (let ((direction (assq-ref directions rid)))
    (map car (filter (lambda (x) (> (second x) 0)) direction))))

;; Here is where the available commands in the given area are printed to the console.
(define (print-available-commands rid)
  (for-each (lambda (x) (printf "'~a'      " x)) (append (get-available-directions rid) (get-available-commands rid))) 
  (printf "\n"))

;; This next section of code checks the areasdb hash table and returns the value of an attribute within the areasdb.
(define (get-area-item id field)
  (if (hash-table-exists? areasdb id)
      (let ((record (hash-table-ref areasdb id)))
        (if (memq field '(locked key gem guard food end))
            (cadr (assq field record))
            "Sorry. wrong field type."))
      "Sorry. There is no such area."))

;; Here is where areasdb is checked, and the list with the given area id is returned.
(define (get-area id)
  (if (hash-table-exists? areasdb id)
      (hash-table-ref areasdb id)
      "Sorry. There is no such area."))

;; Here is where all the available items within the given area id are returned.
(define (get-available-items rid)
  (let ((direction (get-area rid)))
    (map car (filter (lambda (x) (eq? (second x) #t)) direction))))

;; This code is where locked doors are displayed within the given area.
(define (get-locked-area rid)
  (filter 
   (lambda (x) (get-area-item x 'locked)) 
   (map (lambda (x) (lookup directions rid x ))  (get-available-directions rid))))

;; This section of code returns the list of commands which are available to the user, depending on the areasdb attributes.
(define (get-available-commands rid)
  (let ((l (map map-item-command (get-available-items rid))))
    (cons 'heal (if ( > (length (get-locked-area rid)) 0)
                    (cons 'open l)        
                    l))))

; This section of code returns a command depending on the item within the area.
(define (map-item-command item)
  (if (eq? item 'locked) 
      'open
      (if (eq? item 'key)
          'pick
          (if (eq? item 'gem)
              'take
              (if (eq? item 'food)
                  'store
                  (when( eq? item 'guard)
                    'bribe))))))

;; Here is the where the given command is processed within the appropriate area. The areas hash table is also updated, if required.
(define (process-command cmd rid)
  (cond     
    ;; This allows the opening of the door.
    ((equal? cmd 'open)
     (if (> (length (get-locked-area rid)) 0)
         (if (> (user-attribute 'key) 0)
             (begin           
               (let ((target-area (car (get-locked-area rid))))
                 (hash-table-set! areasdb target-area (replace-area-item (get-area target-area) 'locked '(locked #f)))
                 (user-add-value 'key -1)
                 (describe 'open)))
             (describe 'no-key))
         (describe 'open-error)))

    ;; This allows the user to pick up keys.
    ((equal? cmd 'pick)
     (if (get-area-item rid 'key)
         (begin
           (hash-table-set! areasdb rid (replace-area-item (get-area rid) 'key '(key #f)))
           (user-add-value 'key 1)
           (describe 'pick))
         (describe 'pick-error)))

    ;; This allows the user to take gems.
    ((equal? cmd 'take)
     (if (get-area-item rid 'gem)
         (begin
           (hash-table-set! areasdb rid (replace-area-item (get-area rid) 'gem '(gem #f)))
           (user-add-value 'gem +1)
           (describe 'take))         
         (describe 'take-error)))

    ;; This code processes the bribe command.
    ((equal? cmd 'bribe)
     (if (get-area-item rid 'guard)
         (begin
           (if (eq? (user-attribute 'gem) 1)
               (begin
                 (hash-table-set! areasdb rid (replace-area-item (get-area rid) 'guard '(guard #f)))
                 (user-add-value 'gem -1))
               (describe 'bribe-no-gem))
           (describe 'bribe))
         (describe 'bribe-blank)))

    ;; This allows food to be stored for health purposes.
    ((equal? cmd 'store)
     (if (get-area-item rid 'food)
         (begin           
           (hash-table-set! areasdb rid (replace-area-item (get-area rid) 'food '(food #f)))
           (user-add-value 'food 1)
           (describe 'store))
         (describe 'store-error)))

    ;; This allows the user to heal themselves when required.
    ((equal? cmd 'heal)
     (if (> (user-attribute 'food) 0)
         (if(< (user-attribute 'health) 100)
            (begin           
              (user-add-value 'food -1)
              (user-update 'health '(health 100))
              (describe 'heal))
            (describe 'health-full))
         (describe 'heal-no-food)))

    ;; This section of code is for error purposes. It notifies the user if an invalid command has been entered.
    (else
     (printf "Invalid command. Please try again."))))

;; This section of code returns a new list by replacing the item with a new value in the appropriate list.
(define (replace-area-item list item new-value)
  (cond
    ((null? list) (quote()))
    (else (cond
            ((eq? (caar list) item)
             (cons new-value (cdr list)))
            (else (cons (car list)
                        (replace-area-item (cdr list) item new-value)))))))  

;; This is where the game is started from the given area.
(define (startgame area-id)
  (describe 'welcome)
  (let loop ((rid area-id) (echo #t))
    (printf "\n")

    ;; Here is where the user items are printed.
    (user-status)    

    ;; This section of code processes the reached destination.
    (when (get-area-item rid 'end)
      (begin
        (describe 'you-won)
        (exit)))

    ;; This section of code allows the area descriptions and available items to be printed.
    (when echo
      (printf "~a\n" (get-area-description rid))
      (map (lambda (x) (describe x)) (get-available-items rid)))

    ;; This section of code processes the outcome of the users health reaching 0.
    ;; Once this is achieved, the loop is exited and a game over message is displayed.
    (when (<= (user-attribute 'health) 0)
      (begin
        (describe 'user-died)
        (exit)))    

    ;; This code allows the available commands to be printed to the console.
    (print-available-commands rid)    
    (printf "\n> ")

    ;; This processes a character, converts it to a lowercase string and then converts it back to a symbol.
    (let ((input (string->symbol (string-downcase (symbol->string (read))))))

      ;; This section of code exits the loop if the user enters the 'quit' command.
      (if (eq? input 'quit) (exit) 'continue)

      ;; Here is where the given command is checked to see if it is present in the available directions.
      (if (member input (get-available-directions rid))
          (let ((direction (lookup directions rid input)))
            (if (zero? direction)
                (loop rid #f)

                ;; This section stops the user from moving away from the area if there is a guard present.
                (if (get-area-item rid 'guard)
                    (begin
                      (describe 'guard-attack)
                      (user-add-value 'health -20)
                      (loop rid #f))

                    ;; This is where the destination area is checked to see if the door present, is locked.
                    (if (get-area-item direction 'locked)
                        (begin
                          (describe 'locked)
                          (loop rid #f))

                        ;; If the door is not locked, proceed onto the destination.
                        (loop direction #t)))))

          ;; Here is where the command is checked to see if it is available in the commands list.
          (if (member input (get-available-commands rid))
              (begin 
                (process-command input rid)
                (loop rid #f))

              ;; Here is where a message is displayed in the console if the command is not valid.
              (begin
                (printf "\n~a is an invalid command!\n" input)
                (loop rid #f)))))))

;; This line of code tells the game to start the user in room 1.
(startgame 1)

I am new to functional programming, so I would appreciate any help/explanations to further my learning. Thank you.

by James Patterson at April 26, 2015 09:22 PM

Connect to a server for JSON message passing-receiving [on hold]

So, I have an assignment for a course in my degree program. The goal is to create a bot (AI) for a "chess" application. To do that we have to connect to the server and exchange JSON messages with it (send and receive). The messages are things related to the game (what the next move will be, etc.). The provided framework is in Python, where everything is ready.

The problem is that I do not want to learn Python, so I was thinking of doing it in Scala. To do that I have to connect to the server (listening at 127.0.0.1:50200) and use a JSON library to send and receive messages from the server.

Does anyone know how I would do that? I have never done anything similar in any language, so I'm kind of lost.

Note: the application will be running on my computer, as will my bot.
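Not course-specific advice, but a hedged sketch of the plumbing in Scala, assuming newline-delimited JSON over plain TCP (the actual framing and message schema are whatever the server defines; for real parsing you would pull in a JSON library such as play-json or json4s):

import java.io.{BufferedReader, InputStreamReader, PrintWriter}
import java.net.Socket

// Connect, send one JSON line, read one line back.
val socket = new Socket("127.0.0.1", 50200)
val out    = new PrintWriter(socket.getOutputStream, true) // autoflush on println
val in     = new BufferedReader(new InputStreamReader(socket.getInputStream))

out.println("""{"type":"move","from":"e2","to":"e4"}""") // hand-rolled JSON for brevity
val reply = in.readLine()                                 // blocks until the server replies
println(reply)
socket.close()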

by lhahn at April 26, 2015 09:15 PM

Overloaded method call has alternatives: String.format

I wrote the following Scala code below to handle a String that I pass in, format the String, append it to a StringBuilder and return the formatted String with escaped unicode back to my caller for other processing.

The Scala compiler complains of the following on the lines where there is a String.format call with the following error:

overloaded method value format with alternatives: (x$1: java.util.Locale, x$2: String, x$3: Object*)String <and> (x$1: String, x$2: Object*)String cannot be applied to (String, Int)

class TestClass {    
    private def escapeUnicodeStuff(input: String): String = {
            //type StringBuilder = scala.collection.mutable.StringBuilder
            val sb = new StringBuilder()
            val cPtArray = toCodePointArray(input) //this method call returns an Array[Int]
            val len = cPtArray.length
            for (i <- 0 until len) {
              if (cPtArray(i) > 65535) {
                val hi = (cPtArray(i) - 0x10000) / 0x400 + 0xD800
                val lo = (cPtArray(i) - 0x10000) % 0x400 + 0xDC00
                sb.append(String.format("\\u%04x\\u%04x", hi, lo)) //**complains here**
              } else if (cPtArray(i) > 127) {
                sb.append(String.format("\\u%04x", cPtArray(i))) //**complains here**
              } else {
                sb.append(String.format("%c", cPtArray(i))) //**complains here**
              }
            }
            sb.toString
          }

    }

How do I address this problem? How can I clean up the code to accomplish my purpose of formatting a String? Thanks in advance to the Scala experts here.
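One hedged way out: Scala's own format method (available on strings via StringOps) takes Any* and boxes primitives itself, so the Java overload ambiguity never arises; alternatively, box explicitly when calling java.lang.String.format. A standalone sketch:

val sb = new StringBuilder
val (hi, lo) = (0xD83D, 0xDE00) // example surrogate-pair values

sb.append("\\u%04x\\u%04x".format(hi, lo))       // Scala's format boxes the Ints
sb.append(String.format("\\u%04x", Int.box(hi))) // or box explicitly for the Java API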

by user3825558 at April 26, 2015 09:03 PM

QuantOverflow

Do intraday volume and volatility share the same properties?

Volatility clustering and mean reversion are very well known properties that one could use when trading. Traders, especially in the options world, do take realized vol into account (e.g. by forecasting it or looking at which percentile the current volatility corresponds to).

I am wondering whether intraday volumes also have the same kind of properties that can be exploited somehow.

I see that some traders look at volume profiles and use indicators like VWAP (volume-weighted average price) and PVP (peak volume price, i.e. the price at which the largest intraday volume was traded). In general they assume that intraday volumes tend to form a symmetric distribution, which leads to this kind of rule for forecasting the price direction:

If PVP > VWAP then the volume distribution is skewed to the upside, and this generates a "pressure" for prices to move downwards, at least until the VWAP. With PVP < VWAP the reasoning is mirrored and the pressure is upwards.

There is an exception to this rule: when the price action is at one of the extremes of the volume distribution (e.g. price > PVP > VWAP), then the previous logic doesn't apply (even if PVP > VWAP).
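For reference, with the day's trades having prices $p_i$ and volumes $v_i$, VWAP is (hedged notation):

$$\mathrm{VWAP} = \frac{\sum_i p_i v_i}{\sum_i v_i}$$

while PVP is the mode of the same volume-at-price distribution, so the rule above is a statement about where the mode sits relative to this weighted mean.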

Is there any statistical evidence that intraday volumes actually tend to generate symmetric distributions, thus making it possible to exploit the temporary skewness that is generated intraday?

Is there any study on that, or is anyone willing to share her/his experience with it?

Thank you for your help.

by opt at April 26, 2015 09:00 PM

CompsciOverflow

How to state a recurrence that expresses the worst case for good pivots?

The Problem: Consider the randomized quicksort algorithm, which has an expected worst-case running time of $\Theta(n \log n)$. With probability $\frac12$ the pivot selected will be between $\frac{n}{4}$ and $\frac{3n}{4}$ (a good pivot). Also with probability $\frac12$ the pivot selected will be between $1$ and $\frac{n}{4}$ or between $\frac{3n}{4}$ and $n$ (i.e. a bad pivot).

1. State a recurrence that expresses the worst case for bad pivots.
2. State a recurrence that expresses the worst case for good pivots.

My Work: I understand the overall idea of quicksort - partition all the elements around the pivot (smaller to the left, larger to the right) and repeat the process for the elements to the left of the pivot and for the elements to the right of the pivot.
Here is the recurrence I came up with for 1: $ T(n) = \begin{cases} c & \text{if $n$ is 1} \\ T(n-1) + n, & \text{if $n$ is > 1} \end{cases}$
I was able to work out this recurrence by reasoning that if I just chose the worst possible pivot (either greatest or least), I would have to traverse all the elements (the $n$) and then repeat this process for the rest of the elements ($n-1$).

I am having trouble coming up with a recurrence that expresses the worst case for good pivots. The two I came up with are for a pivot of $\frac{n}{2}$ and a pivot of $\frac{n}{4}$.

The recurrence for a pivot of $\frac{n}{2}$ would be $ T(n) = \begin{cases} c & \text{if $n$ is 1} \\ 2T(\lfloor(n/2)\rfloor) + n, & \text{if $n$ is > 1} \end{cases}$
I was able to reason this out because if you had a pivot of $\frac{n}{2}$, you have to evaluate all the elements against the pivot (the $n$) and then partition equal proportions to the left and to the right of the pivot.

The recurrence for a pivot of $\frac{n}{4}$ would be $ T(n) = \begin{cases} c & \text{if $n$ is 1} \\ T(\lfloor(3n/4)\rfloor) + T(\lfloor(n/4)\rfloor) + n, & \text{if $n$ is > 1} \end{cases}$

Which one of these would express the worst case for good pivots?

by committedandroider at April 26, 2015 08:46 PM

Lobsters

QuantOverflow

Transforming Variables in Regression

I have a very simple problem that hopefully someone could help me with or at least point me in the right direction.

I am testing to see which factors affect index returns the most and would like to find the correct way to transform variables used in the multiple regression model.

For my dependent variable, I have calculated quarterly index returns.

For the independent variables set, I have collected quarterly data for: CPI%, 3 year rate, 10 year rate, unemployment rate, and a couple of exchange rates.

Should I calculate quarterly % changes for all of my variables (since I used quarterly returns for my dependent variable), leave them unchanged (and use index values instead of returns), or transform them in a different way?

Thank you,

by Jay at April 26, 2015 08:39 PM

/r/compsci

Are there any algebraic formalisms for graph traversal problems that are useful?

My TA says I'm more likely to get confused if I think about the problems on our plate right now in terms of the algebra (MST, shortest path, TSP, minimum cut). He says that the algebra is grad-level and quite abstract. I'm not so sure it wouldn't help me connect the dots better with the stuff that's already floating around in my head.

I suspect there are deep connections, because a lot of the basic terminology we've covered appears to be borrowed from math (relaxation, adjacency matrices). I wonder sometimes if traversal problems possibly relate to Kirchhoff's circuit laws in some way that's useful. MST and shortest path certainly have a similar flavor to the sorts of problems for which Kirchhoff's circuit laws are both the lock and key (but I don't remember those problems very well... my class skipped over those sections; I did them for interest).

Also, this is very low-hanging fruit, but I noticed the rank of the adjacency matrix will tell you if the graph is unicyclic, and any unicyclic graph would produce a full-rank matrix. It wouldn't necessarily tell you whether the graph contains a cycle, though. I cite this mainly as grounds for my hunch that there are probably other meaningful deductions to be made from treating the graph as a system, deductions that I haven't discovered yet.

I'm sure there do exist ways to model graph problems as linear algebra problems. That isn't intrinsically interesting. It would be very interesting to me if there were algebraic models that could be useful in the day-to-day design of software, but I so far haven't come across any.

submitted by Probono_Bonobo
[link] [3 comments]

April 26, 2015 08:06 PM

StackOverflow

Why Spark is running more than one process?

Recently I have had a problem with Spark. I am working on a small cluster (4 nodes) and I saw that Spark was running (after some more complex calculations) a second process, and it's causing some weird problems on this node, for example:

5/04/22 08:54:37 WARN TaskSetManager: Lost task 2.1 in stage 10.0 (TID 52, hadoop1.itx.pl): java.lang.NoSuchMethodError: clojure.lang.Reflector.invokeNoArgInstanceMember(Ljava/lang/Object;Ljava/lang/String;Z)Ljava/lang/Object;

I don't know what the cause of the problem is, but when I kill the Spark worker process and start it again (with one process only) it works okay until the next "cloning".

I have default spark-env settings, so SPARK_WORKERS should be 1.

by Dawid Pura at April 26, 2015 08:00 PM

CompsciOverflow

Restricted version fo CNF-SAT

Given a formula $\phi$ in CNF form, as in CNF-SAT. Clauses can be arbitrarily long. The problem is NP-complete, and it is also given that part of what makes it hard is that a variable can occur many times in a formula.

Another problem, $CNF_3$, is introduced: each variable may occur at most three times in a given formula. This problem is also NP-complete.

Problem: Decide whether this formula is satisfiable or not. Give a method translating any $\phi$ to a $CNF_3$ formula $\psi$ such that $\phi$ is satisfiable if and only if $\psi$ is.

The following hints are given:

  • Introduce new variables in $\phi$
  • Variables $a$ and $b$ are equivalent if $(a\lor \lnot b )\land (\lnot a\lor b)$

I have trouble starting this task. Any help necessary is appreciated!

by reaper123 at April 26, 2015 08:00 PM

TheoryOverflow

Can we know how quickly the complexity of Turing machines grow?

Suppose that you consider a reasonable indexing for Turing machines, and that you consider these Turing machines M1, M2, ... , running on a blank input.

Is there a way to tell how quickly the running times of those machines that halt will grow?

For example:

M1 - 0 cycles

M2 - 10 cycles

M3 - 45 cycles

M4 - never halts

M5 - 38 cycles

M6 - 72 cycles

...

...or is this undecidable for some reason? An example answer might be that the number of cycles that such a Turing machine takes on input "" is bounded by 2^|M|+|M|^3+50, where |M| is the length of the machine M_n, unless M_n never halts on "".

My reason for asking this question is that I was thinking about answers to this problem:

Can we not output the Kolmogorov complexity?

... and ran across this issue when thinking of a possible response.

Note, I've considered Rice's theorem, but I don't think it applies, because this is a property of the machine, not the language.

by Philip White at April 26, 2015 07:33 PM

QuantOverflow

Clarification of Saturation-Reset Regimes

I have worked my way through this article; while waiting to get into school I have been self-learning a bit. I have a good grasp on most of the article, but the component strategies of Saturation and Reset are a little confusing to me.

When the control parameter exceeds its defined maximum, the article stipulates that saturation events occur.

$$I(k^* + j + 1) = \min\{I(k^*+j) + K\Delta g(k^*+j), I_\textrm{max}\}$$

Does this formula, found on page 3, mean that when the value of the position exceeds the maximum position allowed by the control parameter, the excess is liquidated until the position is under the maximum? If not, what is going on here? Is the position simply treated as the maximum, and if so, why?

Thanks for any help.

by Tan Dollars at April 26, 2015 07:24 PM

Planet Clojure

Clojure Gazette 123

Clojure Gazette 123
Bananas, Bipolar, Relativity

Clojure Gazette

Issue 123 April 26, 2015


Editorial

Hi Clojurists,

Clojure/West happened this week. Many videos have been released (I think one or more did not make it). Of course I've been busy with those, but I haven't seen them all yet. From what I've seen, this was the best speaker lineup at a Clojure conference to date.

There are exciting times ahead. The optimism around React (in the browser and Native) is palpable. And the increasing effort to diversify the community is compelling (though there's a long way to go!). Finally, I liked the new format: talks of different lengths to accommodate more, smaller talks.

Rock on!
Eric Normand <eric@lispcast.com> @ericnormand

PS Please tell your friends about the Gazette! It's a great way to show support.

PPS Learn more about the Clojure Gazette and subscribe. Learn about advertising in the Gazette.

Affiliate Sponsor: Telestream ScreenFlow


I make videos. I've tried all sorts of setups across different operating systems. The only setup that works well is ScreenFlow on a Mac. It works flawlessly. It can capture audio, video, and your screen, all at the same time. It also gives you an amazing video editor. Lay out your videos, add annotations, zoom in on sections of the screen, etc. I don't think I could make LispCast videos without it. I recommended ScreenFlow before I was an affiliate, I still recommend them, and I use them for all my screencasts. Final Cut Pro X is great but it costs $299. ScreenFlow is a bargain at $99. Please support the Gazette and give it a try.

The ReactJS Landscape Youtube


Luke VanderHart gives a thorough overview (at Clojure/West) of the three main ReactJS ClojureScript libraries: Om, Reagent, and his own Quiescent. His descriptions lay clear the strengths and weaknesses of each.

Building CircleCI's Front end With Om Youtube


A very nice talk at Clojure/West by Brandon Bloom. I love the closing section, where he posits that Clojure + ClojureScript have the technology to build rich web applications an order of magnitude faster than other languages. We just need to put them together.

I agree with this sentiment. It is only a matter of time.

Redesigning a Broken Internet Youtube


Cory Doctorow is a sci-fi writer and electronic freedom advocate. This talk explains the problems with new legislation trying to restrict the universality of universal computers and how it affects our lives. This issue is becoming more and more important as computers are embedded in everything.

Capacitive Touch Banana Piano using Clojure / Overtone Youtube


A short video showing how to make a banana keyboard that plays music with Overtone. Why not?

Prof. Sussman's Reading List


Hello? I love recommended reading from great minds.

The Relativity of Wrong


It's common to think that every scientific theory will eventually be overturned. And so anything we believe today will be wrong very soon. So why invest any belief in current ideas? Isaac Asimov explains that although theories are replaced with better ones, we can't really say that the old ones are absolutely wrong, only more wrong than the new one.

How to Tell if You've Accidentally Built a Language Youtube


Jeanine Adkisson does it again with her magic touch of insightful analysis and playful slides. This one is about why one might design a language, with some helpful hints about how to go about it. This is my favorite Clojure/West talk so far.

Spreading parentheses of love


A great writeup of the recent ClojureBridge London workshop. It's great to see something so positive. I hope the attendees go on to explore great things!

The State of Clojure on Android


A nice benchmark showing the results of last year's Google Summer of Code project called Skummet, which aimed to reduce Clojure startup times. Did it work? You'll have to read the graphs to find out.

The Bipolar Lisp Programmer


Way back in 2007, people were still wrestling with the "failure of Lisp". After falling from the peak of its promise in the 1980s, people wondered why "inferior" languages--even inferior models of computation--were popular while Lisp, obviously better for many problems, was ridiculed. This essay is one of those soul-searching pieces.
Copyright © 2015 LispCast, All rights reserved.


unsubscribe from this list    change email address    advertising information

by Clojure Gazette at April 26, 2015 07:23 PM

QuantOverflow

Good book about replicating portfolios

I want to know if anybody can suggest a good textbook which explains, in detail and in an understandable way, how to create replicating portfolios for financial instruments like "cash or nothing" and "asset or nothing" options. I would also like to know if there is a good book explaining the change of measure between the objective probability and the risk-neutral probability... Sorry for the newbie question, but I'm attending a course on this material and the professor provides badly written lecture notes that don't tell the whole story...

by ale42 at April 26, 2015 07:17 PM

StackOverflow

Why Scala library shuns dynamic binding?

If I were writing the library I would habitually write Option like this:

abstract class Option[+A] {
  def map[B](f: A => B): Option[B]
}

case object None extends Option[Nothing] {
  override def map[B](f: Nothing => B): Option[B] = None
}

case class Some[+A](a: A) extends Option[A] {
  override def map[B](f: A => B): Option[B] = new Some(f(a))
}

Notice the use of polymorphism for the map implementation. However, the real implementation of map lives entirely in the Option class and looks like this:

def map[B](f: A => B): Option[B] =
    if (isEmpty) None else Some(f(this.get))

I claim my implementation is cleaner (see the advantages of polymorphism elsewhere) and is probably faster. In the Either type, pattern matching is used instead of if in similar cases, reminding me of the switch statements you see C people use when they come to Java. Interestingly, the analogous implementation in Try follows my OOP school. So I would guess the shorter solution was selected for Option. Are there any other reasons?

by jacool at April 26, 2015 07:14 PM

Example of a matrix as Applicative functor

I already asked a similar question but it was not clear enough, so I decided to rephrase it.

I know that a matrix is an applicative functor but not a monad. I am wondering if there is a simple and practical example of <*> for matrices.
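
For illustration, here is one way to see it (my own sketch in Scala, not from the post): for matrices of a fixed shape, <*> can be defined pointwise, exactly like Haskell's ZipList, by applying a matrix of functions to a same-shaped matrix of values cell by cell.

case class Matrix[A](rows: Vector[Vector[A]]) {
  def map[B](f: A => B): Matrix[B] = Matrix(rows.map(_.map(f)))

  // <*>: apply a same-shaped matrix of functions cell by cell
  def ap[B](mf: Matrix[A => B]): Matrix[B] =
    Matrix(rows.zip(mf.rows).map { case (vs, fs) =>
      vs.zip(fs).map { case (v, f) => f(v) }
    })
}

// Practical use: element-wise addition of two matrices via map + ap
val m1 = Matrix(Vector(Vector(1, 2), Vector(3, 4)))
val m2 = Matrix(Vector(Vector(10, 20), Vector(30, 40)))
val sum = m2.ap(m1.map(a => (b: Int) => a + b))
// Matrix(Vector(Vector(11, 22), Vector(33, 44)))

Note that pure only makes sense once a fixed shape is chosen: it must produce a constant matrix of that shape.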

by Michael at April 26, 2015 07:06 PM

Maven Archetypes for Scala web app

Is there a Maven archetype for building a reactive web app with Akka and a NoSQL database like MongoDB?

by fredyjimenezrendon at April 26, 2015 07:06 PM

StackOverflow

Does a method reference in Java 8 have a concrete type and if so, what is it? [duplicate]

This question already has an answer here:

This question is pretty closely related to another one. However, I feel like the accepted answer to that question is not quite as definitive.

So, what is the type of a method reference in Java 8? Here's a little demonstration of how a method reference can be "cast" (lifted?) into a java.util.function.Function:

package java8.lambda;

import java.util.function.Function;

public class Question {
  public static final class Greeter {
    private final String salutation;

    public Greeter(final String salutation) {
      this.salutation = salutation;
    }

    public String makeGreetingFor(final String name) {
      return String.format("%s, %s!", salutation, name);
    }
  }

  public static void main(String[] args) {
    final Greeter helloGreeter = new Greeter("Hello");

    identity(helloGreeter::makeGreetingFor)
      .andThen(g -> "<<<" + g + ">>>")
      .apply("Joe");

    //Compilation error: Object is not a functional interface
//    Function
//      .identity()
//      .apply(helloGreeter::makeGreetingFor)
//      .andThen(g -> "<<<" + g + ">>>")
//      .apply("Joe");

    Function
      .<Function<String,String>>identity()
      .apply(helloGreeter::makeGreetingFor)
      .andThen(g -> "<<<" + g + ">>>")
      .apply("Joe");

    //Compilation error: Cannot resolve method 'andThen(<lambda expression>)'
//    (helloGreeter::makeGreetingFor)
//      .andThen(g -> "<<<" + g + ">>>")
//      .apply("Joe");

//    java.lang.invoke.LambdaMetafactory ???
  }

  private static <I,O> Function<I,O> identity(final Function<I,O> fun1) {
    return fun1;
  }
}

So, is there a less painful (more straightforward) way of casting a method reference into a compiled/concrete type which can be passed around?

by Andrey at April 26, 2015 06:59 PM

/r/compsci

Help with project for turing $

Help with a Turing project: I need to make a choose-your-own-adventure game. I will pay $50 over e-transfer, needed by the end of the day. Urgent, please help.

submitted by bigfreakydaddy
[link] [comment]

April 26, 2015 06:54 PM

CompsciOverflow

Algorithm to generate two diffuse, deranged permutations of a multiset at random

Background

$\newcommand\ms[1]{\mathsf #1}\def\msD{\ms D}\def\msS{\ms S}\def\mfS{\mathfrak S}\newcommand\mfm[1]{#1}\def\po{\color{#f63}{\mfm{1}}}\def\pc{\color{#6c0}{\mfm{c}}}\def\pt{\color{#08d}{\mfm{2}}}\def\pth{\color{#6c0}{\mfm{3}}}\def\pf{4}\def\pv{\color{#999}5}\def\gr{\color{#ccc}}\let\ss\gr$Suppose I have two identical batches of $n$ marbles. Each marble can be one of $c$ colors, where $c≤n$. Let $n_i$ denote the number of marbles of color $i$ in each batch.

Let $\msS$ be the multiset $\small\{\overbrace{\po,…,\po}^{n_1},\;\overbrace{\pt,…,\pt}^{n_2},\;…,\;\overbrace{\vphantom 1\pc,…,\pc}^{n_c}\}$ representing one batch. In frequency representation, $\msS$ can also be written as $(\po^{n_1} \;\pt^{n_2}\; … \;\pc^{n_c})$.

The number of distinct permutations of $\msS$ is given by the multinomial: $$\left|\mfS_{\msS}\right|=\binom{n}{n_1,n_2,\dots,n_c}=\frac{n!}{n_1!\,n_2!\cdots n_c!}=n! \prod_{i=1}^c \frac1{n_i!}.$$

Question

Is there an algorithm to generate two diffuse, deranged permutations $P$ and $Q$ of $\msS$ at random? (The distribution should be uniform.)

  • A permutation $P$ is diffuse if for every distinct element $i$ of $P$, the instances of $i$ are spaced out roughly evenly in $P$.

    For example, suppose $\msS=(\po^4\;\pt^4)=\{\po,\po,\po,\po,\pt,\pt,\pt,\pt\}$.

    • $\{\po, \po, \po, \pt, \pt, \pt, \pt, \po\}$ is not diffuse
    • $\{\po, \pt, \po, \pt, \po, \pt, \po, \pt\}$ is diffuse

    More rigorously:

    • If $n_i=1$, there is only one instance of $i$ to “space out” in $P$, so let $\Delta(i)=0$.
    • Otherwise, let $d(i,j)$ be the distance between instance $j$ and instance $j+1$ of $i$ in $P$. Subtract from it the expected distance between instances of $i$, defining the following: $$\delta(i,j)=d(i,j)-\frac n{n_i}\qquad\qquad\Delta(i)=\sum_{j=1}^{n_i-1} \delta(i,j)^2$$ If $i$ is evenly spaced in $P$, then $\Delta(i)$ should be zero, or very close to zero if $n_i\nmid n$.

    Now define the statistic $s(P)=\sum_{i=1}^c\Delta(i)$ to measure how much every $i$ is evenly spaced in $P$. We call $P$ diffuse if $s(P)$ is close to zero, or roughly $s(P)\ll n^2$. (One can choose a threshold $k\ll1$ specific to $\msS$ so that $P$ is diffuse if $s(P)<kn^2$.)

    This constraint recalls a stricter real-time scheduling problem called the pinwheel problem with multiset $\ms A=n/\msS$ (so that $a_i=n/n_i$) and density $\rho=\sum_{i=1}^c n_i/n=1$. The objective is to schedule a cyclic infinite sequence $P$ such that any subsequence of length $a_i$ contains at least one instance of $i$. In other words, a feasible schedule requires all $d(i,j)≤a_i$; if $\ms A$ is dense ($\rho= 1$), then $d(i,j)=a_i$ and $s(P)=0$. The pinwheel problem appears to be NP-complete.

  • Two permutations $P$ and $Q$ are deranged if $P$ is a derangement of $Q$; that is, $P_i ≠ Q_i$ for every index $i\in[n]$.

    For example, suppose $\msS=(\po^2\;\pt^2)=\{\po,\po,\pt,\pt\}$.

    • $\{\po, \pt, \po, \pt\}$ and $\{\po, \po, \pt, \pt\}$ are not deranged
    • $\{\po, \pt, \po, \pt\}$ and $\{\pt, \po, \pt, \po\}$ are deranged

Exploratory analysis

I am interested in the family of multisets with $n=20$ and $n_i=4$ for $i\lesssim4$. In particular, let $\msD=(\gr1^4\,\gr2^4\,\gr3^4\,\gr4^3\,\gr5^2\,\gr6^1\,\gr7^1\,\gr8^1)$.

  • The probability that two random permutations $P$ and $Q$ of $\msD$ are deranged is about 3%.

    This can be calculated as follows, where $L_k$ is the $k$th Laguerre polynomial: \begin{align*} \left|{\mathfrak D}_{\msD}\right| &=\int_0^\infty \!\!dt\; e^{-t}\, \prod_{i=1}^c L_{n_i}(t) =\int_0^\infty \!\!dt\; e^{-t}\, \bigl(L_4(t)\bigr)^3\bigl(L_3(t)\bigr)\bigl(L_2(t)\bigr)\bigl(L_1(t)\bigr)^3\\ &=4.5\times10^{11}\\ \left|\mfS_{\msD}\right| &=n!\prod_{i=1}^c \frac1{n_i!} =\frac{20!}{(4!)^3\,(3!)\,(2!)\,(1!)^3} =1.5\times10^{13}\\ p&=\left|{\mathfrak D}_{\msD}\right|/ \left|\mfS_{\msD}\right|\approx0.03\end{align*} See here for an explanation.

  • The probability that a random permutation $P$ of $\msD$ is diffuse is about 0.01%, setting the arbitrary threshold at roughly $s(P)<25$.

    Below is an empirical probability plot of 100,000 samples of $s(P)$ where $P$ is a random permutation of $\msD$.

    At medium sample sizes, $s(P)\sim \text{Gamma}(\alpha\approx8,\beta\approx18)$.

    \begin{array}{ccl}\renewcommand\mfm[1]{\textbf{#1}} \hline P & s(P) & \text{cdf}(s(P)) \\ \hline \{\po, \ss8, \pt, \pth, \pf, \po, \pv, \pt, \pth, \ss6, \po, \pf, \pt, \pth, \ss7, \po, \pv, \pt, \pf, \pth\} & \frac{11}9\approx1\, & <10^{-5} \\ \{\ss8, \pt, \pth, \pf, \po, \ss6, \pv, \pt, \pth, \pf, \po, \ss7, \po, \pt, \pth, \pv, \pf, \po, \pt, \pth\} & \frac{140}9\approx16 & <10^{-4} \\ \{\pth, \ss6, \pv, \po, \pth, \pf, \pt, \po, \pt, \ss7, \ss8, \pv, \pt, \pf, \po, \pth, \pth, \pt, \po, \pf\} & \frac{650}9\approx72 & \phantom{<1}0.05 \\ \{\pth, \po, \pth, \pf, \ss8, \pt, \pt, \po, \po, \pv, \pth, \pth, \pt, \ss6, \pf, \pf, \pt, \po, \ss7, \pv\} & \frac{1223}9\approx136 & \phantom{<1}0.45 \\ \{\pf, \po, \po, \pf, \pv, \pv, \po, \pth, \pth, \ss7, \po, \pt, \pt, \pf, \pth, \pth, \ss8, \pt, \pt, \ss6\} & \frac{1697}9\approx189 & \phantom{<1}0.80 \\ \hline \end{array}

The probability that two random permutations are valid (both diffuse and deranged) is around $v\approx(0.03)(0.0001)^2\approx10^{-10}$.

Non-viable algorithms

A common “fast” algorithm to generate a random derangement of a set is rejection-based:

do
    P ← random_permutation(D)
until is_derangement(D, P)
return P

which takes approximately $e$ iterations, since there are roughly $n!/e$ possible derangements. However, a rejection-based randomized algorithm would not be efficient for this problem, as it would take on the order of $1/v\approx10^{10}$ iterations.
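
In Scala, the quoted rejection loop might look like the sketch below (my illustration of the non-viable baseline, not part of the question; uniformly shuffling the multiset's elements gives each distinct permutation equal probability, since every distinct permutation corresponds to the same number of position arrangements):

import scala.util.Random

def isDerangement[A](d: Vector[A], p: Vector[A]): Boolean =
  d.indices.forall(i => d(i) != p(i))

// Rejection sampling: about e iterations for plain derangements of a set,
// but on the order of 1/v ≈ 10^10 for the diffuse-and-deranged pair.
def randomDerangement[A](d: Vector[A]): Vector[A] =
  Iterator.continually(Random.shuffle(d))
    .dropWhile(p => !isDerangement(d, p))
    .next()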

In the algorithm used by Sage, a random derangement of a multiset “is formed by choosing an element at random from the list of all possible derangements.” Yet this too is inefficient, as there are $v\,|\mfS_{\msD}|^2\approx10^{16}$ valid permutations to enumerate, and besides, one would need an algorithm just to do that anyway.

Further questions

What is the complexity of this problem? Can it be reduced to any familiar paradigm, such as network flow, graph coloring, or linear programming?

by hftf at April 26, 2015 06:39 PM

StackOverflow

Scala: understanding how to make my method return the proper return type of Array

I have written the following Scala code:

class MyTestApi {

  private def toCPArray(inputStr: String): Array[Int] = {
    val len = inputStr.length
    // invoke ofDim of scala.Array
    val cpArray = Array.ofDim[Int](inputStr.codePointCount(0, len))
    var i = 0
    var j = 0
    while (i < len) {
      cpArray(j += 1) = inputStr.codePointAt(i)
      i = inputStr.offsetByCodePoints(i, 1)
    }
    cpArray
  }

}

This is what I want to accomplish:
I would create an instance of class MyTestApi, invoke the method toCPArray, and pass it a parameter of type String. I would then like this method to return an Array of type Int.

However as it stands now, the Scala IDE is complaining about this line: cpArray(j += 1) = inputStr.codePointAt(i)

It complains: type mismatch; Found: Unit required: Int

Two things I would like to accomplish: first, how would I fix this method (or is it a function)? My hope is that after I understand what it takes to fix this method (or function), I will be able to return the appropriate type. Second, I should then be in a better position to understand the difference between a method and a function.

So far my research on Stack Overflow and Martin Odersky's book suggests that what I wrote is a method, because it is invoked on an instance of the underlying class. Is my understanding right on that?

After it is fixed, how can I rewrite it in a more Scala-esque way, getting rid of the vars? The code looks more C- or Java-like right now, and is a little long in my opinion, given all that I have studied about Scala so far.

Thanks for any help in refactoring the above code to accomplish my learning objectives.
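
Not an official answer, but a sketch of the likely fix: j += 1 is an assignment and evaluates to Unit, so it cannot be used as an array index. Incrementing j as a separate statement fixes the type error; on Java 8+, the whole loop can also collapse into a stream one-liner.

def toCPArray(inputStr: String): Array[Int] = {
  val len = inputStr.length
  val cpArray = Array.ofDim[Int](inputStr.codePointCount(0, len))
  var i = 0
  var j = 0
  while (i < len) {
    cpArray(j) = inputStr.codePointAt(i)
    j += 1 // increment separately: `j += 1` itself is of type Unit
    i = inputStr.offsetByCodePoints(i, 1)
  }
  cpArray
}

// Or, without any vars (using Java 8's String#codePoints):
def toCPArray2(inputStr: String): Array[Int] = inputStr.codePoints.toArray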

by user3825558 at April 26, 2015 06:34 PM

/r/compsci

[Course] iOS Development Course - learn the essentials, build a complete app in < 10 hrs

Thanks for taking the time to read my post. I am planning to create a fast-paced iOS course targeting experienced developers (who want to learn the iOS framework). A few FAQs related to the course:

Please share some feedback / suggestions / improvements and support if you like it

  • What will be covered in the course

The kickstart video gives a brief intro to the app we will be building in this course. All the contents are covered on the Kickstarter page ( http://kck.st/1Jg5nQm )

  • How is this different than the hundreds if not thousands of existing courses?

Short and to the point, targeted at experienced developers with prior programming experience; intermediate topics covered in a short span of time (I believe other courses teach this in a time frame of 120 hours or more and charge a lot more). Also, you are welcome to suggest a few advanced topics you want covered in the course.

  • Why should people fund this when existing ones already exist? Why does the creator need $5k to put together course material?

This is my entrepreneurial experiment. That's the reason I have chosen Kickstarter: to assess whether there is demand for a course like this. If yes, I will create the course. Recording and editing course videos is more time-consuming than just writing code or blogs. Based on my estimates, $5k sounds like a legitimate amount for the effort required.

  • Who are the developers who are creating the course, what have they done?

Please check the Kickstarter page for my LinkedIn profile and GitHub projects. You can find a lot of information there.

submitted by karanjude
[link] [comment]

April 26, 2015 06:21 PM

/r/clojure

Why is the startup time of Clojurescript programs slower than JVM Clojure programs?

I am not sure if it's generally true, but in my experience Clojure programs I run on the JVM start up pretty slowly, while ClojureScript programs start up seemingly instantly. That, and the ease of async programming (with immutability!), makes at least my CLJS programs generally better performing than my JS programs.

I can do pretty much the same in cljs as in clojure, so what feature of ClojureScript makes its startup fast?

submitted by thdgj
[link] [9 comments]

April 26, 2015 06:19 PM

StackOverflow

Spring RESTful API :: Using Traits for the @RestController methods

I'm trying to set up Spring to use Scala traits for the REST controllers.

Let's say I need 2 resources exposed: Author and Publication. Here's the author:

AuthorController.java
@RestController
class AuthorController extends BaseController
  with ReadTrait
{
  def getSuffix(): String = {
    "author"
  }
}

and here's the publication:

PublicationController.java
@RestController
class PublicationController extends BaseController
  with ReadTrait
{
  def getSuffix(): String = {
    "publication"
  }
}

they both use the read trait.

and the read trait I need for both:

ReadTrait.java
@RestController
trait ReadTrait {

  def getSuffix(): String

  @Secured(Array("ROLE_USER"))
  @RequestMapping(value = Array("/{resource}"), method = Array(RequestMethod.GET))
  def read(@PathVariable("resource") resource: String): Author = {

    if (resource == "author") {
      // ... 
    }
  }
}

So the problem I'm facing is that Spring blows up when those 2 classes use the same trait, failing with this:

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'requestMappingHandlerMapping' defined in class path resource [org/springframework/boot/autoconfigure/web/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class]: Invocation of init method failed; nested exception is java.lang.IllegalStateException: Ambiguous mapping found. Cannot map 'publicationController' bean method 
public abstract com.example.project.core.Author com.example.project.api._trait.ReadTrait.read(java.lang.String)
to {[/{resource}],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}: There is already 'authorController' bean method
public abstract com.example.project.core.Author com.example.project.api._trait.ReadTrait.read(java.lang.String) mapped.
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1574)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:755)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:757)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
        at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
        at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:957)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:946)
        at com.example.project.Application.main(Application.java:10)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:53)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Ambiguous mapping found. Cannot map 'publicationController' bean method 
public abstract com.example.project.core.Author com.example.project.api._trait.ReadTrait.read(java.lang.String)
to {[/{resource}],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}: There is already 'authorController' bean method
public abstract com.example.project.core.Author com.example.project.api._trait.ReadTrait.read(java.lang.String) mapped.
        at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.registerHandlerMethod(AbstractHandlerMethodMapping.java:212)
        at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.detectHandlerMethods(AbstractHandlerMethodMapping.java:184)
        at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.initHandlerMethods(AbstractHandlerMethodMapping.java:144)
        at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.afterPropertiesSet(AbstractHandlerMethodMapping.java:123)
        at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.afterPropertiesSet(RequestMappingHandlerMapping.java:126)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1633)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1570)
        ... 21 common frames omitted

Everything builds fine and works as expected when I remove one of the Controllers. It also builds fine but doesn't work if I remove the @RestController annotation on the ReadTrait. Is the approach I'm trying out wrong? How can I make it work?
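
One direction worth trying (a sketch of my own, not a verified fix): the collision is on the URL, since both concrete classes inherit the same /{resource} mapping from the trait. Giving each controller a distinct class-level @RequestMapping makes the combined paths unique:

import org.springframework.web.bind.annotation.{RequestMapping, RestController}

// Sketch only: combined with the trait's method-level mapping, the handler
// paths become /author/{resource} and /publication/{resource}, so the two
// beans no longer map to the same URL.
@RestController
@RequestMapping(Array("/author"))
class AuthorController extends BaseController with ReadTrait {
  def getSuffix(): String = "author"
}

@RestController
@RequestMapping(Array("/publication"))
class PublicationController extends BaseController with ReadTrait {
  def getSuffix(): String = "publication"
}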

Thanks

by Trevor Donahue at April 26, 2015 06:18 PM

Does Functional Programming Replace GoF Design Patterns?

Since I started learning F# and OCaml last year, I've read a huge number of articles which insist that design patterns (especially in Java) are workarounds for the missing features in imperative languages. One article I found makes a fairly strong claim:

Most people I've met have read the Design Patterns book by the Gang of Four. Any self respecting programmer will tell you that the book is language agnostic and the patterns apply to software engineering in general, regardless of which language you use. This is a noble claim. Unfortunately it is far removed from the truth.

Functional languages are extremely expressive. In a functional language one does not need design patterns because the language is likely so high level, you end up programming in concepts that eliminate design patterns all together.

The main features of functional programming include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.

Additionally, in functional languages which support OOP (such as F# and OCaml), it seems obvious to me that programmers using these languages would use the same design patterns found available to every other OOP language. In fact, right now I use F# and OCaml everyday, and there are no striking differences between the patterns I use in these languages vs the patterns I use when I write in Java.

Is there any truth to the claim that functional programming eliminates the need for OOP design patterns? If so, could you post or link to an example of a typical OOP design pattern and its functional equivalent?
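
For a concrete instance of the claimed collapse (my illustration, in Scala): the GoF Strategy pattern, which in Java typically needs an interface plus one class per strategy, reduces to passing a function value.

// Strategy as a plain function value: no interface, no concrete classes.
def totalPrice(prices: List[Double], discount: Double => Double): Double =
  prices.map(discount).sum

val holidaySale: Double => Double = _ * 0.8   // one "strategy"
val fullPrice:   Double => Double = identity  // another

totalPrice(List(10.0, 20.0), holidaySale)  // 24.0
totalPrice(List(10.0, 20.0), fullPrice)    // 30.0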

by Juliet at April 26, 2015 06:05 PM

Fefe

What does a press spokesperson actually do at a ...

What does a press spokesperson actually do at a press conference? In case you ever want to have a look…

April 26, 2015 06:01 PM

StackOverflow

Play Framework: Converting strings to numbers while validating JSON does not work

Given the following JSON...

{
    "ask":"428.00",
    "bid":"424.20"
}

... I need to convert the values of ask and bid to numbers:

{
    "ask": 428.00,
    "bid": 424.20
}

As already discussed here, I just need to create a validator like this:

def validate = (
  ((__ \ 'ask).json.update(toNumber)) ~
  ((__ \ 'bid).json.update(toNumber))
).reduce

private def toNumber(implicit reads: Reads[String]) = {
  Reads[JsNumber](js =>
    reads.reads(js).flatMap { value =>
      parse[Double](value) match {
        case Some(number) => JsSuccess(JsNumber(number))
        case _ => JsError(ValidationError("error.number", value))
      }
    }
  )
}

The problem is that only the last node (bid) gets actually converted to a number... and the resulting JSON looks like this:

{
    "ask":"428.00",
    "bid":424.20
}

Am I missing something?

EDIT

Using andThen only works if the JSON structure only contains strings to convert to numbers... whereas if the JSON structure already contains numeric fields it doesn't. Given the following JSON [last is already numeric]:

{
    "ask":"428.00",
    "bid":"424.20",
    "last": 430.05
}

If I modify my validator like this [replaced ~ with andThen and removed reduce]...

def validate = (
  ((__ \ 'ask).json.update(toNumber)) andThen
  ((__ \ 'bid).json.update(toNumber)) andThen
  ((__ \ 'last).json.pickBranch(Reads.of[JsNumber]))
)

... then I get the following error when trying to validate my JSON above:

JsError(List((/bid/last,List(ValidationError(error.path.missing,WrappedArray())))))

by j3d at April 26, 2015 05:47 PM

Planet Emacsen

Jorgen Schäfer: Buttercup 1.1 Released

I just released version 1.1 of Buttercup, the Behavior-Driven Emacs Lisp Testing framework.

Buttercup is a behavior-driven development framework for testing Emacs Lisp code. It is heavily inspired by Jasmine.

Installation and Use

Buttercup is available from Marmalade and MELPA Stable.

Example test suite:

(describe "A suite"
  (it "contains a spec with an expectation"
    (expect t :to-be t)))

Suites group tests, and suites can be nested. Contrary to ERT, suites can share set-up and tear-down code for tests, and Buttercup comes with built-in support for mocks in the form of spies. See the package homepage above for a full description of the syntax for test suites and specs.

Buttercup comes with a shell script to run the default discover runner. If used together with Cask, cask exec buttercup will find, load, and run test suites in your project.

Changes Since 1.0

  • Buttercup now sports a full reporter interface, in case you want to write your own reporter. By default, there is a batch and an interactive reporter.
  • Reporters now display failed tests properly at the end of the test run, together with a properly-formatted backtrace.
  • Pending specs and disabled suites as in Jasmine are now supported.
  • Emacs 24.5 is now officially supported.
  • There’s now a buttercup script to run the most common command line.
  • Test runners are now autoloaded.
  • Test discovery now ignores dot files and dot directories.
  • Buttercup tests can now be instrumented with Edebug.

by Jorgen Schäfer (noreply@blogger.com) at April 26, 2015 05:46 PM

CompsciOverflow

Build-Max-Heap vs. HeapSort

I'm not sure whether my definitions of these 2 terms are correct. Hence, could you help me verify them:

HeapSort: A procedure which sorts an array in place.

Build-Max-Heap: A procedure which runs in linear time and produces a max-heap from an unordered input array.

Is worst-case input = worst-case running time?

If so, for an input of size $n$, would the worst-case input for Build-Max-Heap be the same as the worst-case input for HeapSort, whose running time is $\mathcal{O}(n \log n)$?
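
For reference, the standard CLRS-style analysis (not part of the original question): Build-Max-Heap calls Max-Heapify once per internal node, and summing the cost by node height gives a linear bound, $$\sum_{h=0}^{\lfloor\lg n\rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h) = O\!\left(n \sum_{h=0}^{\infty} \frac{h}{2^h}\right) = O(n),$$ whereas HeapSort performs $n$ extractions costing $O(\log n)$ each, for $O(n \log n)$ overall. So the two procedures have different worst-case running times, and a worst-case input for one need not be a worst-case input for the other.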

by iterence at April 26, 2015 05:22 PM

/r/emacs

Request : typewriter-mode

They say Emacs can do anything, so I was wondering if the following things could be done in Emacs. I am not a programmer and am still learning Emacs, so forgive me for any extravagant requests.

  • Editing: Backspace disabled. However, we are not used to it being like on a typewriter, where what's on the page is on the page. So I say you could not delete after having pressed space; that is, once you move on from a word there is no way you could edit it.

  • Navigation: Limited. This could be customized. I'd prefer not to have navigation beyond the visible buffer.

  • Other tools: No extra stuff, no spell check, no dictionary, no thesaurus. Just letters on the page.

  • Export: Once you are out of the mode, you'd get the same text in a new buffer, and the original file would remain the way you wrote it. It would give you an option to take it to a new file or an existing one. You could even set up a default file to transfer it to.

  • Multitasking: On a typewriter you can't just quit and open up your browser. Similarly, we could enter a specified amount of time or a word count before each writing session, and it would not allow you to move out of it (change buffers or open new windows), and if possible would not allow you to leave Emacs at all, unless you have met the condition you set at the beginning.

  • Sounds: I hear some people like the clickety sound of the typewriter; I don't like it myself. Since it's typewriter mode, the sound could be appealing to some folks out there.

If this is too much to ask for, I'd like you to point me towards learning how to write packages. I assume that will require good knowledge of Lisp and some basic programming skills? Then it could take a few months. However, if something like this already exists, it could be easier to mash existing packages together into the above.

submitted by curious-scribbler
[link] [9 comments]

April 26, 2015 05:09 PM

StackOverflow

Merging RDDs using Scala Apache Spark

I have 2 RDDs.

RDD1: ((String, String), Int)
RDD2: (String, Int)

For example:

    RDD1

    ((A, X), 1)
    ((B, X), 2)
    ((A, Y), 2)
    ((C, Y), 3)

    RDD2

    (A, 6)
    (B, 7)
    (C, 8)

Output Expected

    ((A, X), 6)
    ((B, X), 14)
    ((A, Y), 12)
    ((C, Y), 24)

In RDD1, each (String, String) combination is unique, and in RDD2 every String key is unique. The score of A from RDD2 (6) gets multiplied with the score values of all entries in RDD1 that have A in their key.

14 = 7 * 2
12 = 6 * 2
24 = 8 * 3

I wrote the following, but it gives me an error on case:

val finalRdd = countRdd.join(countfileRdd).map(case (k, (ls, rs)) => (k, (ls * rs)))

Can someone help me out with this?
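
Two issues seem to be in play (a sketch of my own, not a verified answer): a pattern-matching anonymous function needs curly braces rather than parentheses, and the two RDDs cannot be joined directly because their key types differ. Re-keying RDD1 by its first string makes the join line up:

// Re-key RDD1 from ((a, x), v) to (a, (x, v)) so it shares RDD2's String
// key, join against RDD2, multiply, then restore the composite key.
val finalRdd = countRdd
  .map { case ((a, x), v) => (a, (x, v)) }
  .join(countfileRdd)
  .map { case (a, ((x, v), s)) => ((a, x), v * s) }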

by AngryPanda at April 26, 2015 04:57 PM

Dave Winer

Great "Blue Sky" platforms

The hairball

Our current platform is a mess:

  1. CSS.

  2. HTML.

  3. JavaScript.

  4. A server.

Think of how much you have to learn to become "full stack" in this world.

At least four different syntaxes, and a CSS preprocessor.

HTML is XML. JavaScript is like C or Pascal. The server could be written in any number of different languages. And JSON. None of them are going away.

I've been working in this world for many years, and right now would be at a loss to write a canonical "hello world" app.

What a Tower of Babel. What an opportunity to topple it, esp given all the people these days who want to program.

It grows when it's simple

The big strides in tech happen when the platform gets reduced to simplicity. Examples include:

  1. Unix with C.

  2. Apple II with UCSD P-System.

  3. MS-DOS with Turbo Pascal.

  4. The Web with a plain text editor.

I call these blue sky platforms

In all these systems, lifting the hood is child's play.

Typing and modifying "Hello World" is easy. From there, there are no huge cliffs in the way of becoming a master.

1995: "A platform must have potential, or open space. I call this blue sky."

Digging out of the mess

We can get back to blue sky any time we want.

I would start from Node.js and build out.

The web browser, as currently configured, would have to be replaced by something that's thoughtfully designed. Or maybe a platform built on top of CSS+HTML+JavaScript, hiding it behind a factored interface? Not sure. But we'll never have great growth until we factor out the huge hairball sitting between ideas and implementation.

April 26, 2015 04:49 PM

TheoryOverflow

Percentage guided weight distribution

Say I have 3 different algorithms, each of which produces a total weight value. I would like to assign a weight to each algorithm. Currently I am using, for example:

Algorithm 1 - 8 elements - sum of values is 720
Algorithm 2 - 7 elements - sum of values is 681
Algorithm 3 - 1 element - sum of values is 100

I am currently using a simple scheme for the weights:

Algorithm 1 - 720/(720+681+100)

Algorithm 2 - 681/(720+681+100)

Algorithm 3 - 100/(720+681+100)

Is there a better or more principled way to assign these weights, in theory or in practice?

by biz14 at April 26, 2015 04:45 PM

StackOverflow

Error with capifony deployment to FreeBSD system

I have a problem with uploading code to a FreeBSD server.

Deployment output:

--> Updating code base with checkout strategy
Password for user@server: 
--> Creating cache directory................................✔
--> Creating symlinks for shared directories................✔
--> Creating symlinks for shared files......................✔
--> Normalizing asset timestamps............................✔
--> Copying vendors from previous release...................✔
--> Downloading Composer....................................✘
*** [deploy:update_code] rolling back
failed: "sh -c 'sh -c '\\''cd /var/www/domain.com/releases/20140215073342 && curl -s http://getcomposer.org/installer | php'\\'''" on 0.0.0.0

And if I run the command:

sh -c 'sh -c '\\''cd /var/www/domain.com/releases/20140215073342 && curl -s http://getcomposer.org/installer | php'\\'''

on the server, I get an error:

-bash: php\\: command not found

Could this error be caused by how the special characters are escaped on the FreeBSD system?

Thanks.

UPD

PHP Cli installed (Version: 5.5.9)

by ZhukV at April 26, 2015 04:40 PM

QuantOverflow

Why is Brownian motion merely 'almost surely' continuous?

Why is Brownian motion required to be merely almost surely continuous instead of continuous?

For example, this is stated as condition 2 in this article in section 1, Characterizations of the Wiener process, where it says "The function $t \rightarrow W_t$ is almost surely everywhere continuous." What is an example of a Brownian motion where there is a point at which the motion is not continuous?

by user50229 at April 26, 2015 04:19 PM

Lobsters

Faster Node `require`

Submitting this since I wrote about how slow it was last weekend! Added this cacher to my test file and observed a 40% improvement in startup time.

Comments

by kb at April 26, 2015 04:02 PM

CompsciOverflow

Recurrence relation chip and conquer

Can anyone explain how to find the $\Theta()$ of this recurrence? $$T(n) = 3T(n-4) + cn$$ When I expand to the $k$-th iteration I get $$T(n) = 3^{k}T(n-4k) + 3^{k-1}c(n-4(k-1)) + 3^{k-2}c(n-4(k-2)) + \dots + cn$$ I factor out $c$ and get $$T(n) = 3^{k}T(n-4k) + c\bigl(3^{k-1}(n-4(k-1)) + 3^{k-2}(n-4(k-2)) + \dots + n\bigr)$$ I'm not sure if we need to find upper and lower bounds or solve the geometric series. Any suggestions? I need to find $\Theta()$, and we didn't cover the master theorem, so this is how my class was taught to approach these.
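
As a sanity check (my own working, not from the post): with the corrected expansion, taking $k \approx n/4$ so the recursion bottoms out gives $$T(n) = 3^{n/4}\,T(O(1)) + c\sum_{j=0}^{n/4-1} 3^{j}(n-4j) = \Theta\!\left(3^{n/4}\right),$$ since the geometric factor $3^j$ dominates the linear factor $(n-4j)$ in the sum.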

by MD_90 at April 26, 2015 04:02 PM

CompsciOverflow

Find equivalent LTL formula, without Y (Yesterday) operator. How can I handle first state?

[Figure omitted: the transition system and formula (iv) referenced below]

The task is to find an equivalent LTL formula for $G(a \Rightarrow Yb)$, which doesn't contain the Y operator. My idea is to search for invalid path patterns with 2 $a$'s in a row, e.g. bbbbaab.

Therefore I'm thinking of $G(Xa \Rightarrow b)$: whenever an $a$ occurs in the next state, the current state must satisfy $b$. But I have a problem with the starting states. Consider a path starting with $a$, e.g. s3->s4->...; this path would be false for the original formula (iv), since $Yb$ is always false in the first state. But this exact path would be true for the formula $G(Xa \Rightarrow b)$.

How do I have to modify my formula to also cover the starting states correctly?

by Mad A. at April 26, 2015 03:46 PM

Planet Clojure

Clojure Don’ts: Concat

Welcome to what I hope will be an ongoing series of Clojure do’s and don’ts. I want to demonstrate not just good patterns to use, but also anti-patterns to avoid.

Some of these will be personal preferences, others will be warnings from hard-won experience. I’ll try to indicate which is which.

First up: concat.

Concat, the lazily-ticking time bomb

concat is a tricky little function. The name suggests a way to combine two collections. And it is, if you have only two collections. But it’s not as general as you might think. It’s not really a collection function at all. It’s a lazy sequence function. The difference can be important.

Here’s an example that I see a lot in the wild. Say you have a loop that builds up some result collection as the concatenation of several intermediate results:1

(defn next-results
  "Placeholder for function which computes some intermediate
  collection of results."
  [n]
  (range 1 n))

(defn build-result [n]
  (loop [counter 1
         results []]
    (if (< counter n)
      (recur (inc counter)
             (concat results (next-results counter)))
      results)))

The devilish thing about this function is that it works just fine when n is small.

(take 21 (build-result 100))
;;=> (1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6)

But when n gets sufficiently large,[2] suddenly this happens:

(first (build-result 4000))
;; StackOverflowError   clojure.core/seq (core.clj:133)

In the stack trace, we see concat and seq repeated over and over:

(.printStackTrace *e *out*)
;; java.lang.StackOverflowError
;;      at clojure.core$seq.invoke(core.clj:133)
;;      at clojure.core$concat$fn__3955.invoke(core.clj:685)
;;      at clojure.lang.LazySeq.sval(LazySeq.java:40)
;;      at clojure.lang.LazySeq.seq(LazySeq.java:49)
;;      at clojure.lang.RT.seq(RT.java:484)
;;      at clojure.core$seq.invoke(core.clj:133)
;;      at clojure.core$concat$fn__3955.invoke(core.clj:685)
;;      at clojure.lang.LazySeq.sval(LazySeq.java:40)
;;      at clojure.lang.LazySeq.seq(LazySeq.java:49)
;;      at clojure.lang.RT.seq(RT.java:484)
;;      at clojure.core$seq.invoke(core.clj:133)
;;      at clojure.core$concat$fn__3955.invoke(core.clj:685)
;;      at clojure.lang.LazySeq.sval(LazySeq.java:40)
;;      at clojure.lang.LazySeq.seq(LazySeq.java:49)
;;      ... hundreds more ...

So we have a stack overflow. But why? We used recur. Our code has no stack-consuming recursion. Or does it? (cue ominous music)

Call the bomb squad

Let’s look at the definition of concat more closely. Leaving out the extra arities and chunked sequence optimizations, it looks like this:

(defn concat [x y]
  (lazy-seq
    (if-let [s (seq x)]
      (cons (first s) (concat (rest s) y))
      y)))

lazy-seq is a macro that wraps its body in a function and then wraps the function in a LazySeq object.

The loop in build-result calls concat on the LazySeq returned by the previous concat, creating a chain of LazySeqs like this:

[Figure: LazySeq-tree.png — the chain of nested LazySeq objects built by the loop]

Calling seq forces the LazySeq to invoke its function to realize its value. Most Clojure sequence functions, such as first, call seq for you automatically. Printing a LazySeq also forces it to be realized.

In the case of our concat chain, each LazySeq’s fn returns another LazySeq. seq has to recurse through them until it finds an actual value. If this recursion goes too deep, it overflows the stack.

Just constructing the sequence doesn’t trigger the error:

(let [r (build-result 4000)]
  nil)
;;=> nil

It only overflows when we try to realize it:

(let [r (build-result 4000)]
  (seq r)
  nil)
;; StackOverflowError   clojure.lang.RT.seq (RT.java:484)

This is a nasty bug in production code, because it could occur far away from its source, and the accumulated stack frames of seq prevent us from seeing where the error originated.

Don’t concat

The fix is to avoid concat in the first place. Our loop is building up a result collection immediately, not lazily, so we can use a vector and call into to accumulate the results:

(defn build-result-2 [n]
  (loop [counter 1
         results []]
    (if (< counter n)
      (recur (inc counter)
             (into results (next-results counter)))
      results)))

This works, at the cost of realizing the entire collection up front:

(time (doall (take 21 (build-result-2 4000))))
;; "Elapsed time: 830.66655 msecs"
;;=> (1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6)

This specific example could also be written as a proper lazy sequence like this:

(defn build-result-3 [n]
  (mapcat #(range 1 %) (range 1 n)))

Which avoids building the whole sequence in advance:

(time (doall (take 21 (build-result-3 4000))))
;; "Elapsed time: 0.075421 msecs"
;;=> (1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6)

Don’t mix lazy and strict

There’s a more general principle here:
Don’t use lazy sequence operations in a non-lazy loop.

If you’re using lazy sequences, make sure everything is truly lazy (or small). If you’re in a non-lazy loop, don’t build up a lazy result.

There are many variations of this bug, such as:

(first (reduce concat (map next-results (range 1 4000))))
;; StackOverflowError   clojure.core/seq (core.clj:133)
(nth (iterate #(concat % [1 2 3]) [1 2 3]) 4000)
;; StackOverflowError   clojure.core/seq (core.clj:133)
(first (:a (apply merge-with concat
                  (map (fn [n] {:a (range 1 n)})
                       (range 1 4000)))))
;; StackOverflowError   clojure.core/seq (core.clj:133)

It’s not just concat either — any lazy sequence function could potentially cause this. concat is just the most common culprit.

Footnotes:

[1] All these examples use Clojure version 1.6.0.

[2] Depending on your JVM settings, it may take more or fewer iterations to trigger a StackOverflowError.

by Stuart at April 26, 2015 03:44 PM

CompsciOverflow

Computing theory: can a single node be a subgraph?

Can a single node be considered a subgraph?

For example, if I had this graph, G:

X-----Y

and I deleted Y, leaving me with the graph

X

is this a subgraph (induced) of G?



What about the following argument?

Assume a single node can be considered a graph. Any graph is an induced subgraph of itself. Therefore, a single node graph has a single-node induced subgraph.

Though this is only valid if a single node can be considered a graph.



In computing theory, what is the generally accepted norm?

by Tedfoo at April 26, 2015 03:33 PM

/r/compsci

What assumptions are required to prove that a given hash function does not have a polynomial time inverse?

Obviously P != NP is necessary, but I'm wondering if some mainstream cryptographic hash functions can be shown to be hard to invert with just that. Bonus for constructive proofs.

submitted by PM_ME_UR_OBSIDIAN
[link] [16 comments]

April 26, 2015 03:26 PM

QuantOverflow

How to fit a SARIMA + GARCH in R?

I'd like to fit a non-stationary time series using a SARIMA + GARCH model. I have not found any package that allows me to fit this model. I'm using rugarch:

model = ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model = list(armaOrder = c(2, 2), include.mean = T),
  distribution.model = "sstd")
modelfit = ugarchfit(spec = model, data = y)

but it only allows me to fit an ARMA + GARCH model. Can you help me? Thank you.

by Manuel at April 26, 2015 03:19 PM

Stock price is a martingale if the riskless interest rate is zero?

I came across a question as such:

Suppose company IBC is trading at \$75 per share. What does it cost to construct a derivative security that pays exactly one dollar when IBC hits $100 for the first time? Ignore dividends, assume a riskless interest rate of zero, assume all assets are infinitely divisible, ignore any short sale restriction.

There is a solution using the no-arbitrage argument. But my intuition is to use the martingale. Since the interest rate is zero, if the stock follows geometric Brownian motion, then the drift term becomes zero, so the stock price becomes a martingale. If we use the martingale property $E[S_{0}] = E[S_{T}]$, and assume the upper bound of the stock price is 100 and the lower bound is 0, then we can calculate the probability $\alpha$ of hitting \$100 at time T $$ E[S_{T}] = \alpha\times\$100 + (1-\alpha)\times\$0 = E[S_{0}] = 75 $$ we get $\alpha = 0.75$, so the expected payoff of the derivative is $$ \$0.75=0.75\times\$1 + (1-0.75)\times\$0$$ hence the price of the derivative should be \$0.75. I don't have much background in probability or martingale theory, so is this a valid argument for solving this problem?

by astr627 at April 26, 2015 03:10 PM

CompsciOverflow

Is any language that can express its own compiler Turing-complete?

A comment over on tex.SE made me wonder. The statement is essentially:

If I can write a compiler for language X in language X, then X is Turing-complete.

In computability and formal languages terms, this is:

If $M$ decides $L \subseteq L_{\mathrm{TM}}$ and $\langle M \rangle \in L$, then $F_L = \mathrm{RE}$.

Here $L_{\mathrm{TM}}$ denotes the language of all Turing machine encodings and $F_L$ denotes the set of functions computed by machines in $L$.

Is this true?

by Raphael at April 26, 2015 03:07 PM

Does location transparency imply access transparency?

In distributed systems theory, I have found the definition that a distributed system requires, among others, location and access transparency.

I was wondering if location transparency does not already include access transparency.

Wikipedia defines the two as follows:

Access transparency – Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way.

Location transparency – Users of a distributed system should not have to be aware of where a resource is physically located.

If I am not to become aware of where a resource is physically located, doesn't that automatically imply that I have to be able to access all resources in a uniform way?

If yes, could you leave out access transparency from the definition without changing its meaning?

by helm at April 26, 2015 03:02 PM

/r/emacs

LaTeX/P: slow as can be

Whenever I open a .tex file I have to wait about 5-10 seconds for something to happen. Because the year is not currently 1998, that seems to indicate a problem. The issue seems to be that my LaTeX-mode has been hijacked by a much bulkier mode called LaTeX/P. I'm pretty sure all this began when I installed something called AUCTeX a while ago, somewhat by mistake.

When I run emacs with the -q command, it opens with no problem and gives me my old TeX-mode. (The really weird thing is, commenting out my entire .emacs file doesn't have the same effect, which leads me to believe that my understanding of what the -q flag does is somewhat incomplete.)

Any advice?

submitted by browsin_is_a_paddlin
[link] [14 comments]

April 26, 2015 02:53 PM

CompsciOverflow

Variation on Insertion Sort

I'm writing insertion sort in scheme, but due to the difficulty of writing it recursively within the constraints of list processing of scheme, I made what seems like an insignificant change to the "find lowest" or inner loop.

I'll reproduce the algorithm here in Python (aka the universal pseudocode) to explain. I'll also use iteration so it's easier to understand.

def insertion_sort(list):
  for i in range(len(list)):
    for j in range(i + 1, len(list)):
      if list[j] < list[i]:
        list[i], list[j] = list[j], list[i]

The difference is that I repeatedly swap the old lowest with new lowest numbers as they're found, and reinsert the old lowest back into the list where the new lowest was.

Can I still (correctly) refer to this as insertion sort and is there a significant effect on efficiency?

Also, just in case there's anyone here who loves scheme:

(define lowest-to-front-helper
  (lambda (list lowest)
    (if (eq? (cdr list) '())
        (if (>= (car list) lowest)
            (cons lowest list)
            (cons (car list)
                  (cons lowest '())))
        (if (< (car list) lowest)
            (let ((temp (lowest-to-front-helper (cons lowest (cdr list)) (car list))))
              (cons (car temp)
                    (cdr temp)))
            (let ((temp (lowest-to-front-helper (cdr list) lowest)))
              (cons (car temp)
                    (cons (car list) (cdr temp))))))))

; Permutes list such that the lowest element is moved to the front. (Non-stable.)
(define lowest-to-front
  (lambda (list)
    (lowest-to-front-helper (cdr list) (car list))))

(define insertion-sort
  (lambda (list)
    (if (eq? (cdr list) '())
        list
        (let ((temp (lowest-to-front list)))
          (cons (car temp) (insertion-sort (cdr temp)))))))

by Tyler at April 26, 2015 02:45 PM

StackOverflow

Getting resource path files from the context of a scala macro

I would like to make a macro that validates at compile time the existence of resources in other projects. Is it possible to get this information from the context?

  def example_impl(c: Context): c.Expr[Unit] = {
    import c.universe._
    //instead of  
    // val r = this.getClass().getResource("/file.txt")

    val r = c.something.getResource("/file.txt")
    //...
  }

It may not be possible. But if it is, I'd like to know how to do it.

by user833970 at April 26, 2015 02:44 PM

Real difference between curly braces and parenthesis in scala

After using Scala for a while and reading about it all over the place, and especially here,

I was sure I knew when to use curlies. As a rule of thumb, if I want to pass a block of code to be executed I will use curly braces.

However, this nasty bug surfaced using the elastic4s DSL with curly braces:

bool {
  should {
    matchQuery("title", title)
  }
  must {
    termQuery("tags", category)
  }
}

compiles to:

{
  "bool" : {
    "must" : {
      "term" : {
        "tags" : "tech"
      }
    }
  }
}

while using parenthesis:

bool {
       should (
         matchQuery("title", title)
        ) must (
         termQuery("tags", category)
        )
      }

gives the correct result:

{
  "bool" : {
    "must" : {
      "term" : {
        "tags" : "tech"
      }
    },
    "should" : {
      "match" : {
        "title" : {
          "query" : "fake",
          "type" : "boolean"
        }
      }
    }
  }
}

This was compiled using Scala 2.11.6. Even more confusing is that evaluating the expression in the IntelliJ debugger gives the correct result no matter which form I use.

I noticed that only the last expression was being evaluated. Why is that?
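
A sketch of what seems to be going on (my illustration, not from the post): a { } block passed as an argument is a single expression whose value is its last statement; everything before it is evaluated only for side effects and then discarded.

def show(n: Int): Unit = println(s"got $n")

show {
  println("building the should-clause…") // runs, but its value is discarded
  40 + 2                                 // the block's value: show receives 42
}
// prints: building the should-clause…  then  got 42

On that reading, the braces version evaluates should { ... } purely for its (discarded) value and passes only must { ... } to bool, which would explain why the should clause vanishes from the JSON, while the parentheses version chains the two calls into one combined query.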

by raam86 at April 26, 2015 02:38 PM

CompsciOverflow

Universal Turing machine

I'm trying to find the answers to two questions about the Universal Turing machine.

1. How can the Universal Turing machine simulate a Turing machine that has a larger number of states?

2. How can the Universal Turing machine simulate a Turing machine that has a larger number of alphabet characters?

Can anyone help me with these questions?

Thank you!

by Panarit at April 26, 2015 02:38 PM

StackOverflow

Scala: how to switch from java ArrayList to Scala List [on hold]

Update: I have attempted to make my question clearer. I have been refactoring legacy code by rewriting it in Scala from its original Java version. The two methods you see in the case class below are the unfortunate result of me trying to translate a Java mutable field into a Scala var and rewriting the setter and getter. Of course, when I refactored I ended up with a Java array. I don't necessarily want to use a Java array when I have a Scala alternative. This is the kind of thing I want my code to use.

And the below code is just that. Okay, I created a case class, and now I want a way to pass around this "case class EmailMessage" by updating its state in a separate copy of it, bearing the same name. Thereby I want to avoid having to mutate its state, as the setter and getter would do.

That is how I would like to use the below case class. By writing this case class I have attempted to move my code towards more idiomatic Scala.

Here is the Updated code:

case class EmailMessage(
    toNames: List[String] = Nil
) {

  def setToName(tonames: Array[String]): EmailMessage = {
    this.toNames = new ArrayList[String](Arrays.asList(tonames: _*))
    this
  }

  def getToNames(): Array[String] = {
    this.toname.toArray(Array.ofDim[String](this.toname.size))
  }

}

With the above context in mind, I hope I have elevated my question to a more reasonable one that people can understand. Thanks.
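
A sketch of the idiomatic direction (my illustration, not a drop-in replacement): keep the field an immutable List and return an updated copy via the case class's generated copy method, instead of mutating in place.

case class EmailMessage(toNames: List[String] = Nil) {
  // Returns a new instance with the field replaced; the original is untouched.
  def withToNames(names: Array[String]): EmailMessage =
    copy(toNames = names.toList)

  def toNamesArray: Array[String] = toNames.toArray
}

val m  = EmailMessage()
val m2 = m.withToNames(Array("Ada", "Grace"))
// m.toNames == Nil, m2.toNames == List("Ada", "Grace")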

by user3825558 at April 26, 2015 02:31 PM

QuantOverflow

Lease Accounting / FX Embedded Derivatives

I have a lease agreement where the functional currency is USD and the domestic currency is UAH. The lease agreement is written in EUR (rent rate) and payments are to be made in UAH in the amount of rent rate (EUR) * UAH/EUR exchange rate. Should I account for it as an embedded derivative and value it separately?

The same question applies to the following situation: I have a lease agreement where the functional currency is USD and the domestic currency is UAH. The lease agreement is written in USD (rent rate) and payments are to be made in UAH in the amount of rent rate (USD) * UAH/USD exchange rate. Should I account for it as an embedded derivative and value it separately?

Thank you in advance

by Dasha Sladkova at April 26, 2015 02:19 PM

StackOverflow

Get words and values Between Parentheses in Scala-Spark

Here is my data:

doc1: (Does,1) (just,-1) (what,0) (was,1) (needed,1) (to,0) (charge,1) (the,0) (Macbook,1)
doc2: (Pro,1) (G4,-1) (13inch,0) (laptop,1)
doc3: (Only,1) (beef,0) (was,1) (it,0) (no,-1) (longer,0) (lights,-1) (up,0) (the,-1)
etc...

I want to extract the words and values and then store them in two separate matrices: matrix_1 is (docID, words) and matrix_2 is (docID, values).
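
A minimal sketch of one way to split this in Spark (the regex, object name, and input path are our assumptions, not from the original post):

import org.apache.spark.{SparkConf, SparkContext}

object ParsePairs {
  // one "(word,value)" group; group(1) = word, group(2) = signed integer
  val pair = """\(([^,()]+),(-?\d+)\)""".r

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ParsePairs").setMaster("local"))
    val lines = sc.textFile("docs.txt") // hypothetical input path

    // "doc1: (Does,1) ..." -> (docID, List((word, value), ...))
    val parsed = lines.map { line =>
      val Array(docId, rest) = line.split(":", 2)
      val pairs = pair.findAllMatchIn(rest).map(m => (m.group(1), m.group(2).toInt)).toList
      (docId.trim, pairs)
    }

    val words  = parsed.flatMap { case (id, ps) => ps.map { case (w, _) => (id, w) } }
    val values = parsed.flatMap { case (id, ps) => ps.map { case (_, v) => (id, v) } }

    words.collect().foreach(println)
    values.collect().foreach(println)
  }
}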

by Esmaeil zahedi at April 26, 2015 02:01 PM

How do I use core.match in Clojurescript with goog.events.KeyCodes?

(defn editing-mode? []
  "a hardcoded (for the moment) value, will look up in db later"
  false)

(def UP 38) ;; goog.events.KeyCodes.UP
(def DOWN 40) ;; goog.events.KeyCodes.DOWN
(def LEFT 37) ;; goog.events.KeyCodes.LEFT
(def RIGHT 39) ;; goog.events.KeyCodes.RIGHT
(def W 87) ;; goog.events.KeyCodes.W
(def S 83) ;; goog.events.KeyCodes.S
(def A 65) ;; goog.events.KeyCodes.A
(def D 68) ;; goog.events.KeyCodes.D
(def E 69) ;; goog.events.KeyCodes.E
(def ESC 27) ;; goog.events.KeyCodes.ESC

(defn delta [e]
  ;; e is a google closure Event
  (js/console.log (.-keyCode e))
  (js/console.log (editing-mode?))
  (match [(editing-mode?) (.-keyCode e)]
   [false 38] [:slide :up]
   [false 40] [:slide :down]
   [false 37] [:slide :left]
   [false 39] [:slide :right]
   [false 87] [:slide :up]
   [false 83] [:slide :down]
   [false 65] [:slide :left]
   [false 68] [:slide :right]
   [false 69] [:start-editing]
   [true 27]  [:done-editing]
   :else nil))

The above code works. However, if I try to be a little less wordy and use the goog keycodes directly, like so:

(match [(editing-mode?) (.-keyCode e)]
  [false goog.events.KeyCodes.UP] [:slide :up]
  [false goog.events.keyCodes.DOWN] [:slide :down]
  ...

I get the following cljsbuild error:

...
Caused by: clojure.lang.ExceptionInfo: Invalid local name: goog.events.KeyCodes.UP ...
...

OK, so I can't use the goog.events.KeyCodes.* values themselves, but maybe I can use a def that references them? So I try:

(match [(editing-mode?) (.-keyCode e)]
   [false UP] [:slide :up]
   [false DOWN] [:slide :down]
   ...

This does compile, but now match just isn't working: every key event matches the [false UP] clause (core.match always emits [:slide :up]).

Anyway, the first code example does work. But why can't I use goog.events.KeyCodes.* or references to goog.events.KeyCodes.* in my core.match matcher? Is there something I am missing?

by Stephen Cagle at April 26, 2015 01:50 PM

DragonFly BSD Digest

Lazy Reading for 2015/04/26

We’re already 2/3 of the way to Christmas!

Your unrelated tea links of the week: Do you even steep?  The actual title is different, but I like that part of the link more.  (Thanks, Jeff Ramnani)  Also: Tea With Strangers.  It’s exactly what it sounds like.  Unfortunately, it’s not in my city.  (via)

by Justin Sherrill at April 26, 2015 01:46 PM

StackOverflow

Can't get path-dependent types to work in scala enumerations

I'm trying to wrap my head around the path-dependent types in Scala's enumerations while writing a Reads/Writes for Play 2. Here is the code I have so far; it works, but with an asInstanceOf:

implicit def enumerationReads[T <: Enumeration](implicit t: T): Reads[t.Value] = {
    val validationError = ValidationError("error.expected.enum.name", t.values.mkString(", "))
    Reads.of[String].filter(validationError)(s ⇒ t.values.exists(v ⇒ v.toString == s)).map(t.withName(_))
  }

implicit def enumerationValueSetReads[T <: Enumeration](implicit t: T): Reads[t.ValueSet] =
    Reads.seq(enumerationReads[T]).map(seq ⇒ t.ValueSet(seq.asInstanceOf[Seq[t.Value]]: _*))

What can I do to get rid of the asInstanceOf on the last line? I tried typing the enumerationReads as enumerationReads[t.Value], but that doesn't work: the compiler complains in the argument of t.ValueSet that Seq[t.Value] cannot be cast to Seq[t.Value]. Yes, that didn't make sense to me either, until I started to realize these different t's might actually be different, since they are used in a closure.

So, what can I do to make my code super-duper asInstanceOf-free?

by Dibbeke at April 26, 2015 01:43 PM

StackOverflow

Make Compile Fail on Non-Exhaustive Match in SBT

Let's say that I have a sealed trait, Parent, with one child, Boy.

scala> sealed trait Parent
defined trait Parent

scala> case object Boy extends Parent
defined module Boy

I write a function that pattern matches on the sealed trait. My f function is total since there's only a single Parent instance.

scala> def f(p: Parent): Boolean = p match { 
     |   case Boy => true
     | }
f: (p: Parent)Boolean

Then, 2 months later, I decide to add a Girl child of Parent.

scala> case object Girl extends Parent
defined module Girl

And then re-write the f method since we're using REPL.

scala> def f(p: Parent): Boolean = p match { 
     |   case Boy => true
     | }
<console>:10: warning: match may not be exhaustive.
It would fail on the following input: Girl
       def f(p: Parent): Boolean = p match { 
                                   ^
f: (p: Parent)Boolean

If I were to encounter a non-exhaustive match, then I'd get a compile-time warning (as we see here).

However, how can I make the compilation fail on a non-exhaustive match?
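
For what it's worth, one common lever here (an assumption about what you want, since it makes every warning fatal, not just this one) is the -Xfatal-warnings compiler flag, set from build.sbt:

// in build.sbt: promote all compiler warnings, including
// non-exhaustive match warnings, to compile errors
scalacOptions += "-Xfatal-warnings"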

by Kevin Meredith at April 26, 2015 01:30 PM

Troubles with sbt compiling offline using org.apache.hadoop/* dependencies

I've got a lot of trouble compiling offline with sbt when I have dependencies on org.apache.hadoop packages.

A simple build.sbt:

name := "Test"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.hadoop" % "hadoop-yarn-api" % "2.2.0"

It works fine while online but gives the following error when running offline, even though the package is present in the Ivy cache (under ~/.ivy2/cache/org.apache.hadoop/...):

[info] Loading project definition from /home/martin/Dev/S/project
[info] Set current project to Test (in build file:/home/martin/Dev/S/)
[info] Updating {file:/home/martin/Dev/S/}s...
[info] Resolving org.apache.hadoop#hadoop-yarn-api;2.2.0 ...
[warn] Host repo1.maven.org not found. url=https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-api/2.2.0/hadoop-yarn-api-2.2.0.pom
[info] You probably access the destination server through a proxy server that is not well configured.
[warn]  module not found: org.apache.hadoop#hadoop-yarn-api;2.2.0
[warn] ==== local: tried
[warn]   /home/martin/.ivy2/local/org.apache.hadoop/hadoop-yarn-api/2.2.0/ivys/ivy.xml
[warn] ==== public: tried
[warn]   https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-api/2.2.0/hadoop-yarn-api-2.2.0.pom
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: org.apache.hadoop#hadoop-yarn-api;2.2.0: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn] 
[warn]  Note: Unresolved dependencies path:
[warn]      org.apache.hadoop:hadoop-yarn-api:2.2.0 (/home/martin/Dev/S/build.sbt#L15-16)
[warn]        +- test:test_2.10:1.0
sbt.ResolveException: unresolved dependency: org.apache.hadoop#hadoop-yarn-api;2.2.0: not found
    ...
[error] (*:update) sbt.ResolveException: unresolved dependency: org.apache.hadoop#hadoop-yarn-api;2.2.0: not found
[error] Total time: 3 s, completed Apr 26, 2015 2:46:58 PM

Adding the following resolver didn't help:

resolvers += Resolver.file("Local repo", file(System.getProperty("user.home") + "/.ivy2/cache")) (Resolver.ivyStylePatterns)

It just adds

[warn] ==== Local repo: tried
[warn]   /home/martin/.ivy2/cache/org.apache.hadoop/hadoop-yarn-api/2.2.0/ivys/ivy.xml

The files are present, but named ivy-2.2.0.xml, not 2.2.0/ivys/ivy.xml.

So I tried adding

resolvers += Resolver.file("Local repo 2", file(System.getProperty("user.home") + "/.ivy2/cache")) ( Patterns("[organisation]/[module]/[artifact]-[revision].[ext]") )

This was to force the naming convention, but it then looks under:

[warn] ==== Local repo 2: tried
[warn]   /home/martin/.ivy2/cache/org/apache/hadoop/hadoop-yarn-api/ivy-2.2.0.xml

even though, according to the sbt docs, [organisation] should be org.apache.hadoop and not org/apache/hadoop.

So finally, as a last resort, I added an ugly

resolvers += Resolver.file("Local hadoop cache", file(System.getProperty("user.home") + "/.ivy2/cache")) ( Patterns("org.apache.hadoop/[module]/[artifact]-[revision].[ext]") )

and there it found something, but it still was not happy:

[info] Loading project definition from /home/martin/Dev/S/project
[info] Set current project to Test (in build file:/home/martin/Dev/S/)
[info] Updating {file:/home/martin/Dev/S/}s...
[info] Resolving org.apache.hadoop#hadoop-yarn-api;2.2.0 ...
[warn] Host repo1.maven.org not found. url=https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-yarn-api/2.2.0/hadoop-yarn-api-2.2.0.pom
[info] You probably access the destination server through a proxy server that is not well configured.
[warn] xml parsing: ivy-2.2.0.xml.original:18:69: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
[warn] xml parsing: ivy-2.2.0.xml.original:19:11: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
[warn] xml parsing: ivy-2.2.0.xml.original:20:17: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
[warn] xml parsing: ivy-2.2.0.xml.original:21:14: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
[warn] xml parsing: ivy-2.2.0.xml.original:22:14: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
[warn] xml parsing: ivy-2.2.0.xml.original:24:17: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
[warn] xml parsing: ivy-2.2.0.xml.original:25:12: schema_reference.4: Failed to read schema document 'http://maven.apache.org/xsd/maven-4.0.0.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
...

I'm out of ideas about what to try next. Building offline works fine for every other dependency I tried if I copy the .ivy2/cache/ directories to the offline machine. It's just a bunch of org.apache.hadoop dependencies that cause this problem. The structure and the files under .ivy2/cache/org.apache.hadoop look the same as the ones from the other dependencies that work well.

Adding

offline := true

didn't help either.

Using sbt 0.13.7

Thanks !

by Martin at April 26, 2015 01:28 PM

Fred Wilson

The Lesson Of Title II and Time Warner Cable: Markets Have Two Sides

On Thursday of this past week, I attended a small gathering of academics and policy makers who follow the technology sector. During that gathering, the news came out that the Comcast acquisition of Time Warner Cable was falling apart due to regulatory opposition. The conversation turned to the reasons why this happened.

I surmised that the reason for both the failure of Comcast/Time Warner Cable and the success of the Title II debate several months ago is that regulators and policy makers now understand that markets have two sides and you can’t just look at the consumer facing side of a market.

Comcast was correct in its assertion that they have very little customer overlap with Time Warner Cable and therefore consumers were not being harmed by the consolidation of the two networks. But if you look on the other side of their networks, to the suppliers of applications (Amazon, Google, Facebook, etc) and content (Time Warner, News Corp, Netflix) you see that the consolidation was going to be very harmful. Netflix was going to have one company standing between it and possibly half of its customers in the US. Same with Facebook. And there is no way that was going to be good for them. They may not have come out publicly in opposition to the merger, but you can bet that they came out in opposition privately.

The same is true of Title II regulation of the last mile Internet access. This was not a consumer story either. Very few advocates of “net neutrality rules” believe that this is a consumer issue. Very few have advocated that Internet access prices should be regulated. The debate has always been about the supply side of the market. The side where applications and content live. And the decision to apply Title II regulation to last mile Internet access was essentially a recognition that both sides of a network matter and that it is bad for the economy, society, and innovation to have a network attain enough market power to control what happens on the supply side of a market.

I don’t know enough about communications policy and antitrust policy history to know whether the two sided market construct has played an important role in the past. I think it may well have been an important factor in breakup of AT&T’s monopoly on wired telephony. And I expect there have been other examples as well.

But the one two punch of Title II and Comcast/TWC is a reminder that both sides of a market matter and competition (or the lack thereof) will have an important impact on how these markets function. I am a fan of both decisions and believe that our regulators and policy makers are thinking about this stuff correctly.

by Fred Wilson at April 26, 2015 01:25 PM

CompsciOverflow

Analysis of sorting Algorithm with probably wrong comparator?

This is an interesting question from an interview; I failed it.

An array has n distinct elements [A1, A2, ..., An] (in random order).

We have a comparator C, but it only returns the correct result with probability p.

Now we use C to implement sorting algorithm (any kind, bubble, quick etc..)

After sorting we have [Ai1, Ai2, ..., Ain] (It could be wrong)

Now given a number m (m < n), the question is as follows:

  1. What is the expectation of the size S of the intersection between {A1, A2, ..., Am} and {Ai1, Ai2, ..., Aim}; in other words, what is E[S]?

  2. Is there any relationship among m, n, and p?

  3. If we use a different sorting algorithm, how will E[S] change?

My idea is as follows:

  1. When m=n, E[S] = n, surely
  2. When m=n-1, E[S] = n-1+P(An in Ain)

I don't know how to complete the answer, but I thought it could be solved through induction. Any simulation methods would also be fine, I think.

by GeekCat at April 26, 2015 01:11 PM

StackOverflow

Idiomatic Scala for Options in place of if/else/else chain

I often find myself writing Scala of the form:

def foo = {
  f1() match {
    case Some(x1) => x1
    case _ =>
      f2() match {
        case Some(x2) => x2
        case _ =>
          f3() match {
            case Some(x3) => x3
            case _ =>
              f4()
          }
      }
  }
}

This is the moral equivalent of Java's

Object foo() {
    Object result = f1();
    if (result != null) {
        return result;
    } else {
        result = f2();
        if (result != null) {
            return result;
        } else {
            result = f3();
            if (result != null) {
                return result;
            } else {
                return f4();
            }
        }
    }
}

and it seems ugly and unnecessarily verbose. I feel like there should be a readable way to do this in one line of Scala, but it's not clear to me what it is.

Note: I looked at Idiomatic Scala for Nested Options but it's a somewhat different case.
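
For the record, Option already composes this way: orElse takes its argument by name, so in the sketch below f2 is only evaluated when f1 comes up empty (assuming f1, f2, f3 return Option and f4 returns a plain value, as in the original):

def foo = f1() orElse f2() orElse f3() getOrElse f4()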

by David Moles at April 26, 2015 12:53 PM

Isn't this a double traversal?

In the "programming tips" section of the haskell wiki, I found this example:

count :: (a -> Bool) -> [a] -> Int
count p = length . filter p

This was said to be a better alternative to

count :: (a -> Bool) -> [a] -> Int
count _ [] = 0
count p (x:xs)
   | p x       = 1 + count p xs
   | otherwise =     count p xs

Which, in terms of readability, I entirely agree with.

However, isn't that a double traversal, and therefore actually worse than the explicit-recursion function? Does laziness in GHC mean that this is equivalent to a single traversal after optimisation? Which implementation is faster, and why?

by AJFarmar at April 26, 2015 12:42 PM

CompsciOverflow

Weighted, Acyclic Graph and Change Weights Problem?

I ran into a question as follows:

We have code on a weighted acyclic graph G(V, E) with positive and negative edge weights. We change the weights of this graph with the following code, to give a graph G' without negative edges. Let V = {1, 2, ..., n} and let c_ij be the weight of the edge from vertex i to vertex j.

Change_weight(G) 
 for i=1 to n
   for j=1 to n
      c_i=min c_ij for all j
      if c_i < 0 
          c_ij = c_ij-c_i  for all j
          c_ki = c_ki+c_i  for all k

We have two claims:

1) The shortest path between every two vertices in G is the same as in G'.

2) The length of the shortest path between every two vertices in G is the same as in G'.

We want to verify these two statements: which one is true and which one is false? Can anyone give a hint as to why each is true or false?

My Solution:

I think statement two is false by the following counterexample: the original graph is given on the left, and the result after the algorithm is run is on the right. The shortest path from 1 to 3 changed: it used to pass through vertex 2, but after the algorithm is run it no longer does.

by Mio Unio at April 26, 2015 12:34 PM

/r/compsci

Parity bit question.

Hi guys, just a quick question: let's say I am transmitting with even parity, and my p0, which covers all the bits, is even. Does that mean an error cannot be found, since p0 checks that all data bits are correct?

submitted by nnkc911
[link] [1 comment]

April 26, 2015 12:27 PM

QuantOverflow

Proving there exists no arbitrage opportunities given 3 states and 2 assets

Assume there are 3 states of the world: w1, w2, and w3. Assume there are two assets: a risk-free asset returning Rf in each state, and a risky asset with return R1 in state w1, R2 in state w2, and R3 in state w3. Assume the probabilities are 1/4 for state w1, 1/2 for state w2, and 1/4 for state w3. Assume Rf=1.0 and R1= 1.1, R2=1.0 and R3= 0.9.

(a) Prove that there are no arbitrage opportunities. (b) Describe the one-dimensional family of state price vectors (q1, q2, q3).

For (a), I believe this is equivalent to showing there exists a state price vector.

I know p=Xq, but since we are only given two assets X doesn't have an inverse so I don't know how to compute q. Further, we are not given p. How do I show a state price vector exists?
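
For what it's worth, here is a worked sketch under two assumptions not stated in the post (both assets cost 1 today, and the R's are gross returns): a state price vector $q = (q_1, q_2, q_3)$ must price both assets, so $q_1 + q_2 + q_3 = 1/R_f = 1$ and $1.1 q_1 + 1.0 q_2 + 0.9 q_3 = 1$. Subtracting the first equation from the second gives $0.1 q_1 - 0.1 q_3 = 0$, so $q_1 = q_3 = t$ and $q_2 = 1 - 2t$, which is strictly positive for $0 < t < 1/2$. The existence of this strictly positive one-dimensional family is exactly what rules out arbitrage, and it also answers (b).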

by user2034 at April 26, 2015 12:19 PM

StackOverflow

Ansible windows client or host with Ansible linux server? Possible?

I am using Ansible for an infrastructure management problem in my project. I achieved this task using a Linux client, e.g. copying a bin file from the Ansible server and installing it on the client machine. This involves tasks in my playbooks using normal Linux commands like ssh, scp, ./bin, etc.

Now I want to achieve the same with a Windows client. I couldn't find any good documentation to try it out. If any of you have tried using Ansible with a Windows client, it would be great if you could share the procedure, a prototype, or any piece of information to start with and progress further on my problem.

by Googler at April 26, 2015 12:14 PM

/r/compsci

Simple Device Driver implementation in Linux

Hey guys. I'm new to this subreddit. I am stuck with this device driver implementation: write a device driver to read mouse events (left click and right click) and adjust the brightness of the screen. If the user presses the left mouse button, the brightness of the screen should decrease, and if the user presses the right mouse button, the brightness of the screen should increase. Any help will be of use. But of course, if somebody can give me running code, I'll take it :)

submitted by diabloallica
[link] [comment]

April 26, 2015 11:52 AM

Lobsters

KeyBox: Web-Based SSH Access and Key Management

Web-based administration is combined with management and distribution of users' public SSH keys.

Comments

by skavanagh at April 26, 2015 11:49 AM

QuantOverflow

Braess's paradox in quantitative finance: When optionality leads to lower value...?

One of the standard tenets of quantitative finance is that options should have an intrinsic value because optionality as such (in the sense of having more choices) should bring about value.

This seems to make sense intuitively - yet intuition can sometimes be misleading as we all know: Braess's paradox is called a paradox because here additional choices (i.e. options) can lead to worse overall performance, i.e. reducing the value for all participants.

My question
Are you aware of (theoretical or special) situations where additional optionality in instruments of quantitative finance (e.g. in some exotic options or in some pricing models) could lead to lower value?

by vonjd at April 26, 2015 11:32 AM

Planet Clojure

Reductionem ad finem

An article by Kevin Downey highlighting some underutilized capabilities of Clojure's reduce.

by Andrew Brehaut at April 26, 2015 10:56 AM

CompsciOverflow

What are the k characters which make the most complete words?

Given a word list of $N$ words formed from a language of $M$ characters, where each word is composed of $n \geq 1$ not necessarily distinct characters, how can I find the best set of $k<M$ characters to learn, where "best" is defined as the set of $k$ with which the most complete words can be spelled. Ie, if I know these $k$ characters I can spell $N_k$ words. What is the maximum $N_k$ for every $k$ and which characters should I choose?

Checking a given set of $k$ characters is equivalent to a Scrabble problem, but searching the space becomes very hard very fast. This problem is of limited interest in English, where $M=26$, but is more important in Chinese or Japanese, where $M \sim 10^3$.

I thought of considering the bipartite graph of characters and the words that they make, but I'm not sure what the best search strategy is. I'm somewhat pessimistic that this problem is exactly solvable, and therefore I am willing to try heuristic or stochastic methods.
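
Since heuristics are on the table, a cheap baseline worth trying first is greedy selection (sketch below; note the usual maximum-coverage approximation guarantee does not carry over, since a word only counts once all of its characters are chosen):

// greedy heuristic: at each of the k steps, add the character that
// maximizes the number of fully spellable words so far
def pickChars(words: Seq[Set[Char]], k: Int): Set[Char] =
  (1 to k).foldLeft(Set.empty[Char]) { (known, _) =>
    val candidates = words.flatMap(_ -- known).distinct
    if (candidates.isEmpty) known
    else known + candidates.maxBy(c => words.count(_.subsetOf(known + c)))
  }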

by mmdanziger at April 26, 2015 10:46 AM

StackOverflow

Spray Routing template not working

I'm trying to run the Spray template at https://github.com/spray/spray-template.

I get an error at step 5 (starting the application):

[ERROR] [04/26/2015 12:49:18.613] [on-spray-can-akka.actor.default-dispatcher-4] [akka://on-spray-can/user/IO-HTTP/listener-0] 762
akka.actor.ActorInitializationException: exception during creation
    at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
    at akka.actor.ActorCell.create(ActorCell.scala:596)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:279)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 762
    at spray.can.parsing.HttpHeaderParser.insert(HttpHeaderParser.scala:231)
    at spray.can.parsing.HttpHeaderParser$.insertInGoodOrder$1(HttpHeaderParser.scala:422)
    at spray.can.parsing.HttpHeaderParser$.apply(HttpHeaderParser.scala:429)
    at spray.can.parsing.HttpRequestPartParser$.$lessinit$greater$default$3(HttpRequestPartParser.scala:28)
    at spray.can.server.RequestParsing$.apply(RequestParsing.scala:36)
    at spray.can.server.HttpServerConnection$.pipelineStage(HttpServerConnection.scala:217)
    at spray.can.server.HttpListener.<init>(HttpListener.scala:36)
    at spray.can.HttpManager.newHttpListener(HttpManager.scala:84)
    at spray.can.HttpManager$$anonfun$receive$1$$anonfun$applyOrElse$1.apply(HttpManager.scala:76)
    at spray.can.HttpManager$$anonfun$receive$1$$anonfun$applyOrElse$1.apply(HttpManager.scala:76)
    at akka.actor.TypedCreatorFunctionConsumer.produce(Props.scala:343)
    at akka.actor.Props.newActor(Props.scala:252)
    at akka.actor.ActorCell.newActor(ActorCell.scala:552)
    at akka.actor.ActorCell.create(ActorCell.scala:578)
    ... 9 more

Java Version

java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

by anilkan at April 26, 2015 10:20 AM

How to stop Eclipse / Scala IDE from uploading all my projects at once?

[EDIT]: I don't have a git console installed and therefore only use the git plugin for Eclipse.

I have several projects in my Eclipse / Scala IDE like this:

  • project 1
  • project 2
  • project 3
  • ...

It can happen that I work on, say, project 1 and make some changes. After that I work on project 2 and also make changes. BUT project 2 is still faulty, as I have not finished changing all the code and want to go to bed or something like that.

Now I click on project 1 -> team -> commit ... and down in the file window everything I have done pops up (project 1, which I want to upload, and project 2, which is faulty and which I don't want to upload).

I am aware that there is a filter where you can type in text and thereby only upload the changes from project 1, but I have to type it every time. What if I forget to use the filter? I would upload faulty code!

Also, the filter is very primitive: I can't even save templates to click on later; I have to type the correct(!) filter text every time I want to commit.

So is there a way to just click on a project and upload only that? Or to keep other projects from being uploaded until they are ready?

by hamena314 at April 26, 2015 10:12 AM

/r/emacs

Can't make up my mind regarding Evil/Spacemacs

Anyone want to decide for me? I'm an Emacs beginner (a few months); I don't know Vim; the organised bindings really make sense... but I feel I wouldn't really be learning Emacs. Sorry, just dithering really, but I'm not getting much done at the moment as I keep switching.

submitted by jibbit
[link] [17 comments]

April 26, 2015 09:54 AM

CompsciOverflow

NP-Completeness - Reducing CLIQUE

Given a graph $G$, and integers $c$ and $k$, a group $X$ is a set of nodes $v_1, v_2, \dots, v_{|X|}$ that each have degree at least $c$ and that form a complete subgraph of $G$.

The following decision problem can be formulated:

Is there a group of size $k$ of these kind of nodes in $G$?

Problem:

Show that this problem is NP-Complete by reducing CLIQUE to this problem.

I am well aware that to show NP-completeness you need to be able to verify a solution to the given problem in polynomial time, and then also show NP-hardness. Currently I have solved the first part (showing membership in NP), where the only things that need to be verified are:

  • Each node in $X$ has at least $c$ edges
  • There are exactly $k$ nodes with $c$ edges

This is easily verified in polynomial time.

I am not very sure how to approach the second part. Please give any suggestions! As stated, I am supposed to reduce from the known NP-complete problem CLIQUE, and I know that the CLIQUE decision problem is as follows:

Are there $c$ nodes in $G$ that form a complete subgraph?

by blackravager at April 26, 2015 09:32 AM

StackOverflow

standalone spark: worker didn't show up

I have 2 questions:

This is my code:

object Hi {
  def  main (args: Array[String]) {
    println("Sucess")
    val conf = new SparkConf().setAppName("HI").setMaster("local")
    val sc = new SparkContext(conf)
    val textFile = sc.textFile("src/main/scala/source.txt")
    val rows = textFile.map { line =>
      val fields = line.split("::")
      (fields(0), fields(1).toInt)
    }
    val x = rows.map{case (range , ratednum) => range}.collect.mkString("::")
    val y = rows.map{case (range , ratednum) => ratednum}.collect.mkString("::")
    println(x)
    println(y)
    println("Sucess2")

  }
}

Here is some of the result:

15/04/26 16:49:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/04/26 16:49:57 INFO SparkUI: Started SparkUI at http://192.168.1.105:4040
15/04/26 16:49:57 INFO Executor: Starting executor ID <driver> on host localhost
15/04/26 16:49:57 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@192.168.1.105:64952/user/HeartbeatReceiver
15/04/26 16:49:57 INFO NettyBlockTransferService: Server created on 64954
15/04/26 16:49:57 INFO BlockManagerMaster: Trying to register BlockManager
15/04/26 16:49:57 INFO BlockManagerMasterActor: Registering block manager localhost:64954 with 983.1 MB RAM, BlockManagerId(<driver>, localhost, 64954)
.....
15/04/26 16:49:59 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:839
15/04/26 16:49:59 INFO DAGScheduler: Submitting 1 missing tasks from Stage 1 (MapPartitionsRDD[4] at map at Hi.scala:25)
15/04/26 16:49:59 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
15/04/26 16:49:59 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, PROCESS_LOCAL, 1331 bytes)
15/04/26 16:49:59 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
15/04/26 16:49:59 INFO HadoopRDD: Input split: file:/Users/Winsome/IdeaProjects/untitled/src/main/scala/source.txt:0+23
15/04/26 16:49:59 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 1787 bytes result sent to driver
15/04/26 16:49:59 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 13 ms on localhost (1/1)
15/04/26 16:49:59 INFO DAGScheduler: Stage 1 (collect at Hi.scala:25) finished in 0.013 s
15/04/26 16:49:59 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
15/04/26 16:49:59 INFO DAGScheduler: Job 1 finished: collect at Hi.scala:25, took 0.027784 s
1~1::2~2::3~3
10::20::30
Sucess2

My first question is: when I check http://localhost:8080/, there is no worker, and I can't open http://192.168.1.105:4040 either. Is this because I'm using Spark standalone? How do I fix this?

(My environment is a Mac; the IDE is IntelliJ.)

[screenshot]

My 2nd question is:

    val x = rows.map{case (range , ratednum) => range}.collect.mkString("::")
    val y = rows.map{case (range , ratednum) => ratednum}.collect.mkString("::")
    println(x)
    println(y)

I think this code for getting x and y could be simpler (something like rows[range], rows[ratednum]), but I'm not familiar with Scala. Could you give me some advice?
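
For the second question, a minimal sketch of one alternative (just one idea, not the only way): collect once and unzip the pairs locally, instead of running two Spark jobs:

// one collect instead of two; unzip splits the pairs into two sequences
val (ranges, ratedNums) = rows.collect().toList.unzip
println(ranges.mkString("::"))
println(ratedNums.mkString("::"))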

by user2492364 at April 26, 2015 09:19 AM

QuantOverflow

Technical analysis - Calculating Aroon Indicator Serie

I'm trying to build a class to create Aroon series, but it seems I don't understand the steps well. I'm not sure what I am supposed to use the period parameter for.

Here is my first attempt:

/// <summary>
/// Aroon
/// </summary>
public class Aroon : IndicatorCalculatorBase
{
    public override List<Ohlc> OhlcList { get; set; }
    public int Period { get; set; }

    public Aroon(int period) 
    {
        this.Period = period;
    }

    /// <summary>
    /// Aroon up: {((number of periods) - (number of periods since highest high)) / (number of periods)} x 100
    /// Aroon down: {((number of periods) - (number of periods since lowest low)) / (number of periods)} x 100
    /// </summary>
    /// <see cref="http://www.investopedia.com/ask/answers/112814/what-aroon-indicator-formula-and-how-indicator-calculated.asp"/>
    /// <returns></returns>
    public override IIndicatorSerie Calculate()
    {
        AroonSerie aroonSerie = new AroonSerie();

        int indexToProcess = 0;

        while (indexToProcess < this.OhlcList.Count)
        {
            List<Ohlc> tempOhlc = this.OhlcList.Skip(indexToProcess).Take(Period).ToList();
            indexToProcess += tempOhlc.Count;

            for (int i = 0; i < tempOhlc.Count; i++)
            {   
                int highestHighIndex = 0, lowestLowIndex = 0;
                double highestHigh = tempOhlc.Min(x => x.High), lowestLow = tempOhlc.Max(x => x.Low);
                for (int j = 0; j < i; j++)
                {
                    if (tempOhlc[j].High > highestHigh)
                    {
                        highestHighIndex = j;
                        highestHigh = tempOhlc[j].High;
                    }

                    if (tempOhlc[j].Low < lowestLow)
                    {
                        lowestLowIndex = j;
                        lowestLow = tempOhlc[j].Low;
                    }
                }

                // compute in floating point: pure int division would truncate
                // (Period - x) / Period to 0 for any 0 < x < Period
                int up = (int)(100.0 * (this.Period - (i - highestHighIndex)) / this.Period);
                aroonSerie.Up.Add(up);

                int down = (int)(100.0 * (this.Period - (i - lowestLowIndex)) / this.Period);
                aroonSerie.Down.Add(down);
            }
        }

        return aroonSerie;
    }
}
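
As a sanity check on the formula (and on the integer-division pitfall noted in the code above), here is a minimal sketch in Scala; the window and tie-handling conventions are ours, since sources differ on them:

// Aroon-Up over the trailing `period` bars:
// 100 * (period - barsSinceHighestHigh) / period
def aroonUp(highs: Seq[Double], period: Int): Double = {
  val window = highs.takeRight(period + 1) // period + 1 data points
  val barsSinceHigh = window.size - 1 - window.zipWithIndex.maxBy(_._1)._2
  100.0 * (period - barsSinceHigh) / period
}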

Has anyone else tried to do this before? Here are a few reference points about the indicator:

by anilca at April 26, 2015 09:09 AM

/r/netsec

StackOverflow

SparkContext inside map

I have a big list of folders (10,000 folders) with .gz file(s) inside, and I'm trying to do something on a per-folder basis, e.g. split each file into smaller pieces.

To achieve this I decided to:

  1. obtain list of folders paths as Array[String]
  2. parallelize this pretty big list to nodes
  3. foldersRDD.foreach(folderName => .... sc.textFile(folderName) ....

It works locally, but on a cluster it leads to a NullPointerException (I guess SparkContext is null on each executor node, and we can't use it inside functions that run on the nodes at all).
How can I redo this example in a way that ensures a one-folder-per-worker execution mode, or some other way to avoid/minimize heavy operations like shuffles?
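
One thing that may help (a sketch, not a full solution; folderPaths is our name): SparkContext only exists on the driver, but sc.textFile accepts a comma-separated list of paths, so all folders can be read in a single driver-side job:

// driver-side only: one RDD spanning every folder, no sc on the executors
val folderPaths: Array[String] = ... // the 10,000 folder paths
val allLines = sc.textFile(folderPaths.mkString(","))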

by Yuriy Perepelytsia at April 26, 2015 08:46 AM

/r/compsci

Pinhole Camera Problem!

I was reading the slides of a university course, and one of the slides says:

1. Eyes/cameras can’t have VERY small holes because that limits the amount of entering light,

2. and diffracts/bends the light.

Bending of light (diffraction) only takes place when light passes near the edge of an object's surface (e.g. water droplets in a cloud). But in the case of a camera or eye, light passes through the lens, not near its edge, so refraction takes place instead of diffraction. So my question is: how can they say that if eyes/cameras have very small holes, they will diffract/bend the light?

submitted by Ganda2
[link] [2 comments]

April 26, 2015 08:13 AM

CompsciOverflow

k-center algorithm in one-dimensional space

I'm aware of the general k-center approximation algorithm, but my professor (this is a question from a CS class) says that in a one-dimensional space, the problem can be solved (optimal solution found, not an approximation) in O(n^2) polynomial time without depending on k or using dynamic programming.

A general description of the k-center problem: Given a set of nodes in an n-dimensional space, cluster them into k clusters such that the "radius" of each cluster (distance from furthest node to its center node) is minimized. A more formal and detailed description can be found at http://en.wikipedia.org/wiki/Metric_k-center

As you might expect, I can't figure out how this is possible. The part currently causing me problems is how the runtime can avoid depending on k.

The nature of the problem causes me to try to step through the nodes on a sort of number line and try to find points to put boundaries, marking off the edges of each cluster that way. But this would require a runtime based on k.

The O(n^2) runtime though makes me think it might involve filling out an nxn array with the distance between two nodes in each entry.

Any explanation of how this works or tips on how to figure it out would be very helpful.

by philv at April 26, 2015 07:57 AM

StackOverflow

Is it possible to create circular references in Clojure?

Ignoring native interop and transients, is it possible to create any data structures in Clojure that contain direct circular references ?

It would seem that immutable data structures can only ever contain references to previous versions of themselves. Are there any Clojure APIs that could create a new data structure that has a reference to itself ?

Scheme has the letrec form which allows mutually recursive structures to be created - but, as far as I can tell, Clojure does not have anything similar.

This question is related to porting Clojure to iOS - which does not have garbage collection, but does have reference counting.

by Nick Main at April 26, 2015 07:56 AM

TheoryOverflow

What is contextual equivalence ignoring non-termination called?

Contextual equivalence ($M_1 \cong_{ctx} M_2$) is often defined as: $C[M_1] \Downarrow V \iff C[M_2] \Downarrow V$

Which is to say for any context $C$, $C[M_1]$ terminates with value $V$ iff $C[M_2]$ terminates with value $V$.

Is there a name for the weaker equivalence: $C[M_1] \Downarrow V_1 \wedge C[M_2] \Downarrow V_2 \Rightarrow V_1 = V_2$?

Which is to say, $C[M_1]$ and $C[M_2]$ reduce to equal values whenever they both terminate.

by Will at April 26, 2015 07:53 AM

UnixOverflow

Problem with pkg behind a chunking proxy

I have a freshly installed FreeBSD 10.1 at work in a VM. As it is behind a corporate proxy, I had to set HTTP_PROXY in the environment, and things began to run fine.

But there was no way to get pkg to work correctly. I installed it from ports; still the same issue:

root@FriBi:~ # pkg update -f
Updating FreeBSD repository catalogue...
pkg: repository meta /var/db/pkg/FreeBSD.meta has wrong version or wrong format
Fetching meta.txz:   0%
pkg: No signature found
pkg: repository FreeBSD has no meta file, using default settings
Fetching packagesite.txz:   0%
pkg: No signature found
pkg: Unable to update repository FreeBSD
root@FriBi:~ #

I was finally able to bypass the proxy ... and it immediately ran fine ... So I tried to analyze the HTTP dialog. The corporate proxy always sends its responses with Transfer-Encoding: chunked, which I suspected to be the cause. I could even confirm it by using a minimal Python proxy that:

  • gets the response from the corporate proxy with readall(), buffering the whole file
  • sends it back with a Content-Length header to its client (here pkg)

and then again it worked (I could do pkg install xorg ...)

My questions :

  • Is pkg really hostile to chunking proxies, or could it be a local configuration problem?
  • If it is, shouldn't it be documented somewhere? (I could not find anything about that.)
  • Is there any simple and nice trick (an official proxy, for example) to transform a chunked HTTP response into a non-chunked one?

EDIT

The proposed patch has been integrated in version 1.5, and this problem is now solved by that version of pkgng.

by Serge Ballesta at April 26, 2015 07:47 AM

StackOverflow

Play-Framework: 2.3.x: play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException:

I am using Play Framework 2.3.x with the reactivemongo-extensions JSON type. The following is my code for fetching the data from the DB:

def getStoredAccessToken(authInfo: AuthInfo[User]) = {
println(">>>>>>>>>>>>>>>>>>>>>>: BEFORE"); //$doc("clientId" $eq authInfo.user.email, "userId" $eq authInfo.user._id.get)
var future = accessTokenService.findRandom(Json.obj("clientId" -> authInfo.user.email, "userId" -> authInfo.user._id.get));
println(">>>>>>>>>>>>>>>>>>>>>>: AFTER: "+future);
future.map { option => {
  println("*************************** ")
  println("***************************: "+option.isEmpty)
  if (!option.isEmpty){
   var accessToken = option.get;println(">>>>>>>>>>>>>>>>>>>>>>: BEFORE VALUE");
   var value = Crypto.validateToken(accessToken.createdAt.value)
   println(">>>>>>>>>>>>>>>>>>>>>>: "+value);
   Some(scalaoauth2.provider.AccessToken(accessToken.accessToken, accessToken.refreshToken, authInfo.scope, 
       Some(value), new Date(accessToken.createdAt.value)))
  }else{
    Option.empty
  }
}}

}

When I was using BSONDao and BsonDocument for fetching the data, this code ran successfully, but after converting to JSONDao I get the following error:

Note: sometimes this code runs, but sometimes it throws an exception after converting to JSON.

play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException: bound must be positive
application - 

Following is the application log with the full exception stack trace:

>>>>>>>>>>>>>>>>>>>>>>: BEFORE
>>>>>>>>>>>>>>>>>>>>>>: AFTER:   scala.concurrent.impl.Promise$DefaultPromise@7f4703e3
play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException: bound must be positive
application - 
! @6m1520jff - Internal server error, for (POST) [/oauth2/token] ->
play.api.Application$$anon$1: Execution exception[[IllegalArgumentException: bound must be positive]]
at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.8.jar:2.3.8]
at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
at scala.Option.map(Option.scala:146) [scala-library-2.11.6.jar:na]
Caused by: java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Random.java:388) ~[na:1.8.0_40]
at scala.util.Random.nextInt(Random.scala:66) ~[scala-library-2.11.6.jar:na]

by Harmeet Singh Taara at April 26, 2015 07:42 AM

TheoryOverflow

Is there a typed lambda calculus which is consistent and Turing complete?

Is there a typed lambda calculus where the corresponding logic under the Curry-Howard correspondence is consistent, and where there are typeable lambda expressions for every computable function?

This is admittedly an imprecise question, lacking a precise definition of "typed lambda calculus." I'm basically wondering whether there are either (a) known examples of this, or (b) known impossibility proofs for something in this area.

by Morgan Thomas at April 26, 2015 07:16 AM

/r/clojure

Help figuring out how to solve my problem with a macro

I'm writing a web application using Ring/Compojure and one of my routes looks like this:

(POST "/api/post" request (if-not (authenticated? request) {:status 403} (handle-create-post request))) 

I realized I want the authentication logic reproduced on several routes that the user needs to be authorized to access, and thought a macro that looked like this:

(authroute (POST "/api/post" request (handle-create-post request))) 

would be nice. This doesn't work, but I'm trying to scrape together something like this:

(defmacro authroute
  "Takes a Compojure route definition and returns a form that also checks
  if the user is authenticated and returns a 403 error if they are not."
  [expression]
  '(~@(butlast expression)
    (if-not (authenticated? ~(nth expression 2))
      {:status 403}
      ~(last expression))))

What am I doing wrong here? Is this a good use of macros?

submitted by Tortoise_Face
[link] [10 comments]

April 26, 2015 06:59 AM

StackOverflow

Apache-Spark Internal Job Scheduling

I came across the feature in Spark that allows you to schedule different tasks within a Spark context.

I want to implement this feature in a program where I map my input RDD (from a text source) into a key-value RDD [K,V], and subsequently make a composite key-value RDD [(K1,K2),V] and a filtered RDD containing some specific values.

The further pipeline involves calling some statistical methods from MLlib on both RDDs, and a join operation followed by externalizing the result to disk.

I am trying to understand how Spark's internal fair scheduler will handle these operations. I tried reading the job scheduling documentation but got more confused by the concepts of pools, users, and tasks.

What exactly are the pools? Are they certain 'tasks' which can be grouped together, or are they Linux users pooled into a group?

What are users in this context? Do they refer to threads? Or is it something like SQL context queries?

I guess it relates to how tasks are scheduled within a Spark context. But reading the documentation makes it seem like we are dealing with multiple applications with different clients and user groups.

Can someone please clarify this?
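
For what it's worth, a minimal sketch of how pools show up in code (the pool name "statistics" is our example): pools are named groups of jobs within one application, and a job lands in the pool named by a thread-local property, so different threads of the same driver can run jobs in different pools:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("FairSchedulingSketch")
  .set("spark.scheduler.mode", "FAIR") // default is FIFO
val sc = new SparkContext(conf)

// jobs submitted from this thread go to the "statistics" pool
sc.setLocalProperty("spark.scheduler.pool", "statistics")
val total = sc.parallelize(1 to 1000).map(_ * 2).sum()

// back to the default pool for subsequent jobs from this thread
sc.setLocalProperty("spark.scheduler.pool", null)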

by Arpit1286 at April 26, 2015 06:55 AM

CompsciOverflow

"Practical forms" of Chernoff bound for inequality in expectation

From Wikipedia:

The above formula is often unwieldy in practice, so the following looser but more convenient bounds are often used:

(i) $Pr(X\geq (1+\delta)\mu)\leq e^{-\frac{\delta^2\mu}{3}}, 0<\delta<1$

(ii) $Pr(X\leq (1-\delta)\mu)\leq e^{-\frac{\delta^2\mu}{2}}, 0<\delta<1$

The assumption they use is $E[X]=\mu$.

Would (i) still hold if we only assume $E[X]\leq \mu$? Would (ii) still hold if we only assume $E[X]\geq\mu$?

If not, what "practical forms" do we have in these cases?
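
One observation that may help (a proof sketch of ours, for $X$ a sum of independent 0/1 variables, as in the standard setting): the proof bounds $E[e^{tX}] \leq e^{(e^t - 1)E[X]}$, and for the upper tail it uses $t > 0$, where the exponent is increasing in $E[X]$; replacing $E[X]$ by any upper bound $\mu \geq E[X]$ therefore preserves (i). Symmetrically, the lower tail uses $t < 0$, where the exponent is decreasing in $E[X]$, so (ii) is preserved when $E[X] \geq \mu$.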

by mba at April 26, 2015 06:48 AM

StackOverflow

How to import .txt file in Scala

Hi, I'm new to programming. How do I read a .txt file? My code can't find the file. Is there any specific directory it has to be put into?

My code:

object Zettel01 extends App {
  import scala.io.Source

  // note: don't hide this in a nested object; objects initialize lazily,
  // so code inside an unreferenced object never runs.
  // fromFile resolves a relative path against the current working directory,
  // i.e. the directory the JVM was started from (usually the project root).
  val gesch = Source.fromFile("DieUnendlicheGeschichte.txt").getLines()
  for (w <- gesch) println(w)
}

I have tried different code, but the problem is always the same: it can't find the .txt file...

Thanks in advance for any help. Flurry1337

by flurry1337 at April 26, 2015 06:42 AM

How can I capture the standard output of clojure?

I have some printlns I need to capture from a Clojure program, and I was wondering how I could capture the output.

I have tried:

(binding [a *out*]
    (println "h")
    a
)

but this doesn't work.

by Zubair at April 26, 2015 06:19 AM

CompsciOverflow

sliding window algorithm implementation

I am having trouble figuring out how to derive the numbers in the solution to the question. I am following the steps, but my numbers do not come near those of the solution. Can someone give a concise, step-by-step way of figuring out both problems?

[image: problem statement]

Solution

[image: worked solution]

by jssmkp at April 26, 2015 06:12 AM

/r/emacs

Lobsters

StackOverflow

Bizarre Swift Compiler Error: "Expression too complex" on a string concatenation

I find this amusing more than anything. I've fixed it, but I'm wondering about the cause. Here is the error: DataManager.swift:51:90: Expression was too complex to be solved in reasonable time; consider breaking up the expression into distinct sub-expressions. Why is it complaining? It seems like one of the simplest expressions possible.

The compiler points to the columns + ");" part.

func createTableStatement(schema: [String]) -> String {

    var schema = schema;

    schema.append("id string");
    schema.append("created integer");
    schema.append("updated integer");
    schema.append("model blob");

    var columns: String = ",".join(schema);

    var statement = "create table if not exists " + self.tableName() + "(" + columns + ");";

    return(statement);
}

the fix is:

var statement = "create table if not exists " + self.tableName();
statement += "(" + columns + ");";

This also works (via @efischency), but I don't like it as much because I think the parentheses get lost:

var statement = "create table if not exists \(self.tableName()) (\(columns))"

by Kendrick Taylor at April 26, 2015 05:32 AM

Planet Theory

TR15-073 | Lower Bounds for Sums of Products of Low arity Polynomials | Neeraj Kayal, Chandan Saha

We prove an exponential lower bound for expressing a polynomial as a sum of product of low arity polynomials. Specifically, we show that for the iterated matrix multiplication polynomial, $IMM_{d, n}$ (corresponding to the product of $d$ matrices of size $n \times n$ each), any expression of the form $$ IMM_{d, n} = \sum_{i=1}^{s} \quad \prod_{j=1}^{m} \quad Q_{ij}, $$ where the $Q_{ij}$'s are of arity at most $t \leq \sqrt{d}$ (i.e. each $Q_{ij}$ depends on at most $t$ variables), the number of summands $s$ must be at least $d^{\Omega\left( \frac{d}{t} \right)}$. A special case of this problem where the $Q_{ij}$'s are further restricted to have degree at most one was posed as an open problem by Shpilka and Wigderson [SW99] and recently resolved in [KS15]. We show that a refinement of the argument in [KS15] yields the above-mentioned lower bound on s, regardless of the degrees of the $Q_{ij}$'s (and also regardless of $m$, the number of factors in each summand). Lower bounds for the same model were also obtained in an almost simultaneous but independent work by Kumar and Saraf [KS15b].

April 26, 2015 05:28 AM

StackOverflow

How do I get keepWhen behaviour in Elm 0.15?

The keepWhen function from earlier versions of Elm was removed. I have ported an Elm application from 0.14, but I'm stuck trying to get one part of it to work where it uses keepWhen.

So basically I'm looking for a function like

keepWhen : Signal Bool -> a -> Signal a -> Signal a

I have found

filter : (a -> Bool) -> a -> Signal a -> Signal a

but it's not quite the same thing, and I haven't figured out how to get it to work.

by kqr at April 26, 2015 05:20 AM

QuantOverflow

Confused on interpretation of betas/alphas in regression in finance

I ran a regression of a stock on an index. I don't have the data in front of me, but this is more of a conceptual question.

Let's say the SP500 returned a total of 23% over this time period and MSFT returned 9%. I ran the regression in R:

 summary(lm(MSFT~SP500,data=mydata))

The coefficients show an intercept of around .003 and a coefficient of around 1.5 for the SP500. The beta is statistically significant at around the 99.9% confidence level, and the intercept is NOT (only around the 38% level).

Now, my understanding of 'alpha', or the intercept, is that it is the 'excess return' that could be gained by investing in a strategy. I am confused about how alpha could ever be positive if the Y value that you are comparing (MSFT) returned less than the X (SP500). Here each 1% change in the SP500 returned 1.5% in Microsoft, but actually having a positive alpha (even a minuscule amount, and even though it's not statistically significant) is hurting my head.

If anyone could explain the role of the intercept in a basic regression like this and the practical interpretation of the betas, I would appreciate it. Would I ever get a positive alpha when MSFT's return is LESS than the SP500's and MSFT=Y, SP500=X?

by runningbirds at April 26, 2015 05:15 AM

StackOverflow

Solve Coin Change Problems with both functional programming and tail recursion?

I know how to solve the Coin Change Problem with both imperative programming and DP.

I know how to solve the Coin Change Problem with both FP and non-tail recursion, but that computes the same subproblems multiple times, which leads to inefficiency.

I know how to compute Fibonacci numbers with both FP and DP/tail recursion. Lots of articles use this example to explain that "FP can be combined with DP" and "recursion can be as efficient as a loop".

But I don't know how to solve the Coin Change Problem with both FP and DP/tail recursion.

I think it's strange that articles on imperative programming always mention the inefficiency of computing the same subproblems multiple times in the Coin Change Problem, while those on FP just omit it.

In a more general sense, I wonder whether FP is powerful enough to solve this kind of "two-dimensional" problem, while Fibonacci is a "one-dimensional" problem. Can anybody help me?

Coin Change Problem:

Given a value N, if we want to make change for N cents, and we have infinite supply of each of S = { S1, S2, .. , Sm} valued coins, how many ways can we make the change? The order of coins doesn’t matter.

For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1},{1,1,2},{2,2},{1,3}. So output should be 4. For N = 10 and S = {2, 5, 3, 6}, there are five solutions: {2,2,2,2,2}, {2,2,3,3}, {2,2,6}, {2,3,5} and {5,5}. So the output should be 5.

I met this problem while learning Scala, so please use Scala to demonstrate the code if possible. Lisp would also be good, but I know little about Haskell.
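
For what it's worth, here is a minimal sketch of one purely functional, bottom-up approach in Scala (the fold plays the role of the DP loop, and foldLeft itself runs iteratively, so there is no stack growth):

// ways(a) = number of ways to make amount a; each coin folds a new
// "ways" vector out of the previous one, exactly like the imperative DP
def countChange(amount: Int, coins: List[Int]): Long = {
  val init = Vector.fill(amount + 1)(0L).updated(0, 1L)
  val ways = coins.foldLeft(init) { (w0, coin) =>
    (coin to amount).foldLeft(w0) { (w, a) =>
      w.updated(a, w(a) + w(a - coin))
    }
  }
  ways(amount)
}

// countChange(4, List(1, 2, 3))     == 4
// countChange(10, List(2, 5, 3, 6)) == 5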

by Warbean at April 26, 2015 04:48 AM

Deserializing to case classes in Scala script using Jackson

I have the following case classes in my Scala script.

case class Story(kind:String, id:String, created_at:String, updated_at:String, accepted_at:String, story_type:String, name:String, description:String, current_state:String, requested_by_id:Long, estimate:Option[Int], project_id:Long, url:String, owner_ids:List[Long], labels:List[Label], owned_by_id:Long)
case class Label(id:Long, project_id:Long, kind:String, name:String, created_at:String, updated_at:String)

This is the mapper configuration.

val mapper = new ObjectMapper() with ScalaObjectMapper
mapper.registerModule(DefaultScalaModule)

I am using the Jackson Scala module to deserialize response data from a REST API into these case classes. They work fine in a Scala file in a project, but when I try to use the same in a Scala script, they become anonymous inner classes, as the whole script gets wrapped in an object.

Since Jackson does not deserialize inner classes, it throws this exception.

com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize Class Main$$anon$2$JsonMapping$Story (of type local/anonymous) as a Bean

Here the case classes are inner classes; if only I could make them static, they would work with Jackson. But that doesn't seem to be possible in Scala.

Is there any workaround for this?

by Joseph Kiran at April 26, 2015 04:41 AM

For multiple generators to handle Seq

I'm new to Scala, and I want to deduplicate a Seq[(Int, Int)] by the first component. My code is as follows:

val seq = Seq((1,1), (0,1), (2,1), (0, 1), (3,1), (2,1))
val prev = -1
val uniqueSeq = for(tuple <- seq.sortBy(_._1) if !tuple._1.equals(prev); prev = tuple._1) yield tuple

But why is the result the following?

uniqueSeq: Seq[(Int, Int)] = List((0,1), (0,1), (1,1), (2,1), (2,1), (3,1))
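
One note on why this happens (our reading of the desugaring, not from the original post): a value defined mid-comprehension, like prev = tuple._1, creates a fresh binding per element and never carries state over to the next iteration, so the guard always compares against the outer val prev = -1 and filters nothing. A minimal sketch that avoids threading state (keeping the first tuple seen per key):

val seq = Seq((1, 1), (0, 1), (2, 1), (0, 1), (3, 1), (2, 1))

// scan in sorted order, appending a tuple only when its key is new
val uniqueSeq = seq.sortBy(_._1).foldLeft(Vector.empty[(Int, Int)]) { (acc, t) =>
  if (acc.lastOption.exists(_._1 == t._1)) acc else acc :+ t
}
// Vector((0,1), (1,1), (2,1), (3,1))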

by ifloating at April 26, 2015 04:00 AM

Can I add class fields dynamically using macros?

I'm new to Scala macros, so sorry if this is an obvious question.

I wonder if the following is even possible before I dig in deeper.

Let's say I have a class named DynamicProperties

Is it possible to add members to the class based on something like this?

val x: DynamicProperties = ...
x.addProperty("foo", 1) 
x.addProperty("bar", true)
x.addProperty("baz", "yep")

and have it be translated somehow to a class that looks like this more or less?

class SomeName extends DynamicProperties { 
   val foo: Int = 1
   val bar: Boolean = true
   val baz: String = "yep"
}

I guess this can be done via reflection, but I want people who use this to have autocomplete when they type x. that matches what they did earlier with the addProperty method. Is this possible using Scala macros? I want to try to implement it, but it would be good to know whether this is going down a dead-end path or not.

by Eran Medan at April 26, 2015 03:34 AM

TheoryOverflow

NP Complete equivalent to finding the minimum subset of "target" vertices of a bipartite graph, to cover the maximum number of "source" vertices

I apologize for the wording of the question. I'm pretty new to theoretical CS; I've been a software engineer for most of my professional career, and I've just started a PhD program. Consider the following graph:

[Figure: a bipartite graph linking the adversary's pure strategies $A_i$ to the targets they attack]

In this graph, each vertex of the form $A_i$ represents an adversary's pure strategy. Every vertex that can be immediately reached from an $A_i$ vertex represents the set of targets attacked by the adversary in this strategy. So, for example, if you look at $A_1$, the targets attacked by the adversary are $\{o, b, a, h\}$.

If a defender places a resource on a target, then any pure strategy used by the adversary to attack that target, fails. So if the defender has $k$ resources, he needs to figure out how he can place them on up to $k$ targets so that he blocks a majority of the attacker's pure strategies.

The attacker's mixed-strategy is known to the defender, and so he knows the probability with which the attacker will play any one of his pure strategies. Each pure-strategy also has a payoff associated with it. If the attacker is successful when playing his strategy, he will receive a payoff corresponding to that strategy. The defender will receive a negative payoff of the same amount. Hence, the effective negative payoff for the defender for any attacker strategy is the probability of the attacker playing that strategy, multiplied by the payoff for playing that strategy. The LP representing this has a negative constant in the objective function that represents the defender's payoff. This negative constant is equal to the maximum payoff the attacker can expect given his mixed strategy. Therefore, the best that the defender can do is 0, and so the LP tries to find the targets to block so that the payoff for the defender is as close to zero as possible.

In the above picture, I was able to solve the linear program to find the optimal solution $\{h, p, j\}$. By placing resources on these targets, he is able to block the following adversary strategies: $\{A_0, A_1, A_2, A_3, A_6, A_7\}$.

The actual problem is an optimization problem, so the defender has to find up to $k$ resources that block attacker strategies so as to give him the maximum payoff.

I know there are a few NP-complete problems on bipartite graphs, but I am having a hard time figuring out which one I can apply here; I'm very new to this.

by Vivin Paliath at April 26, 2015 02:52 AM

Lobsters

StackOverflow

how can access to the new javascript keyword from scala.js?

I am porting a JavaScript library to Scala.js. The JS objects are created with the new keyword on the JavaScript side, so this is what I do in most cases.

trait Point extends js.Object {
  def normalize(length: Double): Point = js.native
}

This seems to work well however, I don't get the same easiness for the constructor.

@JSName("paper.Point")
object PointNative extends js.Object {
  def apply(props: RectProps): Rectangle = js.native
}

The above code does not work. It surely type-checks and compiles, but at runtime it returns undefined.

If I modified PointNative like this then all is good.

import js.Dynamic.{ global => g, newInstance => jsnew }
object PointNative {
  def apply(props: RectProps): Rectangle = jsnew(g.paper.Point)(props).asInstanceOf[Point]
}

Is there a way to use @JSName and js.native with the new keyword?

Thanks!
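
A facade-class sketch (assuming Scala.js 0.6-era facades, in the same style as the trait above): annotating a class rather than an object lets Scala's new compile to the JavaScript new, so no js.Dynamic is needed:

import scala.scalajs.js
import scala.scalajs.js.annotation.JSName

@JSName("paper.Point")
class PointNative(props: RectProps) extends js.Object {
  def normalize(length: Double): PointNative = js.native
}

val p = new PointNative(props) // emits `new paper.Point(props)` in JavaScript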

by Yoel Garcia at April 26, 2015 02:48 AM

CompsciOverflow

Pinhole Camera Problem! [on hold]

I was reading slides of a university course and in one of the slides it says

1. Eyes/cameras can’t have VERY small holes because that limits the amount of entering light
2. and diffracts/bends the light

Bending of light (diffraction) only takes place when light passes near the edge of an object's surface (e.g., water droplets in a cloud). In the case of a camera or an eye, light passes through the lens (not around its surface), so refraction takes place instead of diffraction. So my question is: why do they say that if eyes/cameras have very small holes, they will diffract/bend the light?

by Mr.Grey at April 26, 2015 02:35 AM

QuantOverflow

Binomial tree vs trinomial tree in pricing options

Very new to pricing models. Is there a general guideline for when to use a binomial tree and when a trinomial tree is preferred? As far as I know, unlike a binomial tree, a trinomial tree only gives a range instead of a unique value. Thank you.

by user6396 at April 26, 2015 02:19 AM

Lobsters

StackOverflow

How to create a typeclass instance for any subclass of Traversable in Scala

I've created a toy example to illustrate a compiler error that I don't understand. Shouldn't the implicit conversion from C[_] <: Traversable[T] with Safe[T] to Safe[C[T]] apply?

import scala.language.{implicitConversions, higherKinds}

class ToyExample {

  implicit val stringsafe = new Safe[String] {
    override def stringify(value: String): String = value
  }

  def main(args: Array[String]): Unit = {
    val a: String = "c"
    val b: Seq[String] = Seq("1", "2", "3")
    safe(a)
    safe(b)  // why is this a compiler error?
  }

  def safe[T](value: T)(implicit safe: Safe[T]): String = safe stringify value
}

object Safe {

  implicit def safeTraversable[C[_] <: Traversable[T], T](implicit safe: Safe[T]): Safe[C[T]] =
    new Safe[C[T]] {
      override def stringify(value: C[T]): String = value.map(safe.stringify).toString()
    }
}

trait Safe[T] {

  def stringify(value: T): String
}
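
One fix sketch (assumption: this is an inference problem, since in the bound C[_] <: Traversable[T] the underscore is unrelated to T; naming the parameter inside the bound lets scalac unify Seq[String] with C[T]):

object Safe {
  implicit def safeTraversable[C[X] <: Traversable[X], T](implicit safe: Safe[T]): Safe[C[T]] =
    new Safe[C[T]] {
      // map each element through the element-level instance, then render the collection
      override def stringify(value: C[T]): String = value.map(safe.stringify).toString()
    }
}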

by Jeff May at April 26, 2015 02:08 AM

CompsciOverflow

Linearizability and Serializability in context of Software Transactional Memory

I've been trying to grasp serializability and linearizability in the context of software transactional memory. However, I think both notions can be applied to transactional memory in general.

At this point, the following is my understanding of both subjects.

Serializability

Serializability is a global property. It is a correctness property of transactions. Given k processes that each execute a transaction T_k concurrently, serializability ensures that there is a sequence of transactions that can be executed sequentially (i.e., one after the other) such that the end result is the same as that of the transactions being executed concurrently. So there is a permutation of the list (T1, T2, .., Tk) that defines a list of transactions that can be executed sequentially.

This property makes perfect sense to me and I think my definition is correct. I based this definition on the text in "The Art of Multiprocessor programming" by Herlihy and Shavit.

Linearizability

Linearizability is a local property of concurrent objects (e.g., an instance of a class that is shared amongst threads). Linearizability ensures that when two processes each execute a series of method calls (e.g., enqueue or dequeue on a Queue instance) on that shared object, there is a sequential ordering of these method calls that does not necessarily preserve program order (the order in which the programmer wrote them down), but each method call does seem to happen instantly (i.e., invocation and response follow each other directly), whilst maintaining the result of each method call individually and consequently the object's state.

Question

According to a paper "On the correctness of TM" by Guerraoui and Kapalka this is the definition of linearizability in context of TM:

.. a safety property devised to describe shared objects, is sometimes used as a correctness criterion for TM. In the TM terminology, linearizability means, intuitively, that every transaction should appear as if it took place at some single, unique point in time during its lifespan.

This definition just seems to resemble serializability to me. But the paper further on defines serializability as follows:

.. is one of the most commonly required properties of a database transaction. Roughly speaking, a history H of transactions (i.e., the sequence of all operations performed by all transactions in a given execution) is serializable if all committed transactions in H issue the same operations and receive the same responses as in some sequential history S that consists of only the committed transactions in H. (A sequential history is one without concurrency between the transactions).

This definition however seems to imply that one can reorder statements from transactions in such a way that they are interleaved. (I.e. reorder the statements such that not all the statements of a transaction T appear in sequence in H).


I am under the assumption that my personal definitions above are correct. My actual question is how linearizability is defined in the context of transactional memory. It makes no sense to reason about each method call (i.e., read/write operation) in a transaction individually, as this would break the semantics of transactional memory. Nor would it make sense to have to reason about the interleavings of two concurrent transactions, as this would obviously break serializability. Does linearizability mean that one can reorder individual operations inside a transaction? If linearizability is a stronger form of serializability, it should not matter in which order the operations are executed inside a single transaction.

In short: first of all, is my understanding of serializability and linearizability correct? I am confused after reading a plethora of definitions in different works. And second, what does it mean for a set of transactions to be linearizable?

I have also read the question that was linked inside of the comments. But it did not explain to me my specific question.


by Christophe De Troyer at April 26, 2015 02:00 AM

StackOverflow

Send a WS request for each URL in a list and map the responses to a new list

I'm developing a REST server in Play with Scala that at some point needs to request data from one or more other web services. Based on the responses from these services, the server must compose a unified result to use later on.

Example:

Event C on www.someplace.com needs to be executed. In order to execute Event C, Event A on www.anotherplace.com and Event B on www.athirdplace.com must also be executed.

Event C has a Seq(www.anotherplace.com, www.athirdplace.com) over which I would like to iterate, sending a WS request to each URL respectively to check whether A and B are executed.

It is assumed that a GET to these URLs returns either true or false

How do I collect the responses from each request (preferably combined to a list) and assert that each response is equal to true?

EDIT: An event may contain an arbitrary number of URL's. So I cant know beforehand how many WS requests i need to send.
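
A sketch using Future.sequence (assuming Play 2.3's WS API and that each endpoint literally returns "true" or "false" in the response body):

import play.api.Play.current
import play.api.libs.ws.WS
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future

def allExecuted(urls: Seq[String]): Future[Boolean] = {
  // one Future[Boolean] per URL, collected into a single Future[Seq[Boolean]]
  val checks: Future[Seq[Boolean]] =
    Future.sequence(urls.map(url => WS.url(url).get().map(_.body.trim.toBoolean)))
  checks.map(_.forall(identity)) // true only if every service answered true
}

Future.sequence handles an arbitrary number of URLs, which covers the edit above.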

by Alsaybar at April 26, 2015 01:38 AM

/r/compsci

What is a good alternative to Hadoop for smaller datasets?

I'm currently working on designing an analytics system for a startup. Hadoop seems like the obvious choice, but after researching it I'm not so sure. The files I need to manage will be CSV files that are all about 8MB each. Since the HDFS is supposed to break files into 64MB blocks, it seems vastly overpowered for what I'll be working with. Is Hadoop still a good option, or is there another program/framework that would be more ideal for working with smaller files?

submitted by ChinchillaSanchez
[link] [41 comments]

April 26, 2015 01:36 AM

StackOverflow

Suppressing sbt debug output

How do I suppress SBT's debug messages? They are logged to stdout so running a project produces this:

$ cat src/main/scala/Hello.scala 
object Hello {
  def main(args: Array[String]) {
    Console.println("Hello sbt")
  }
}

$ sbt run > out.txt

$ cat out.txt
[info] Set current project to hello (in build file:/home/synapse/projects/algs2/)
[info] Running Hello 
Hello sbt
[success] Total time: 1 s, completed May 14, 2013 11:39:23 PM
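
One approach sketch (assuming sbt 0.13-era setting keys): raise the log level so the [info]/[success] lines are suppressed.

// build.sbt
logLevel := Level.Error

// or only for one session, at the sbt prompt:
// set logLevel in run := Level.Error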

by synapse at April 26, 2015 01:27 AM

Planet Emacsen

Lobsters

Fefe

It does seem a bit sleazy right now, the way ...

It does seem a bit sleazy right now, the way all these pompous countries want to wring an admission out of Turkey that it bears the guilt for a genocide.

I'm a bit torn on this myself.

On the one hand, I find it a disgrace that Turkey won't admit it. It's a really obvious case.

And if we had some party that is itself free of guilt, then I would find it good if it criticized Turkey.

But who is criticizing the Turks here? Germany itself cannot admit that the Kosovo war was a war of aggression in violation of international law. The Bundeswehr describes the legal situation. Do you see anywhere in there a possible justification for foreign deployments of the Bundeswehr? Me neither. So Germany should stand way back in the corner and keep its mouth shut when it comes to moral questions and international law.

On the other hand, Germany deals with its own past in a comparatively exemplary way. I'm still very dissatisfied, but only once you visit other countries do you notice how good we have it. Just look at how the British deal with their opium past and China. Or at the Americans with their atomic bombings of Japan.

Hence I would almost think the other countries should also politely keep their mouths shut when it comes to reproaching other countries about how they work through their history.

So my position is something like: Turkey does deserve criticism here, but none of the usual suspects is in a moral position that would allow them to criticize other countries.

Update: Oh, and while we're on the subject of recognizing genocide ... how about Germany recognizing the genocide of the Herero and Nama?

The genocide was recognized as genocide under the Convention on the Prevention and Punishment of the Crime of Genocide, adopted by the General Assembly of the United Nations in 1948. The German federal government still takes no position on the assessment of the event and rejects any responsibility for a genocide.

But calling out the Turks, sure?

April 26, 2015 01:01 AM

/r/clojure

Does anyone have the video for the figwheel presentation at Clojurewest?

I was at Bruce Hauman's Developing Clojurescript with Figwheel talk and was hoping to review it / share it with some colleagues, but it doesn't seem to be listed on ClojureTV. Does anyone know if it's available somewhere and, if not, whether there are any plans to eventually publish it?

submitted by Spiph
[link] [2 comments]

April 26, 2015 12:58 AM

StackOverflow

merge to set default values, but potentially expensive functions

An idiomatic way to set default values in clojure is with merge:

;; `merge` can be used to support the setting of default values
(merge {:foo "foo-default" :bar "bar-default"} 
       {:foo "custom-value"})
;;=> {:foo "custom-value" :bar "bar-default"}

In reality however, often the default values are not simple constants but function calls. Obviously, I'd like to avoid calling the function if it's not going to be used.

So far I'm doing something like:

(defn ensure-uuid [msg]
  (if (:uuid msg)
    msg
    (assoc msg :uuid (random-uuid))))

and apply my ensure-* functions like (-> msg ensure-uuid ensure-xyz).

What would be a more idiomatic way to do this? I'm thinking something like:

(merge-macro {:foo {:bar (expensive-func)} :xyz (other-fn)} my-map)

(associf my-map
  [:foo :bar] (expensive-func)
  :xyz (other-fn))

by Vanessa at April 26, 2015 12:52 AM

Java io library: What is the difference between File.toString() and File.getPath()

... since it seems that both return the same string - take a look at this Scala code:

scala> val f = new File("log.txt")
scala> f.getPath
// res6: String = log.txt
scala> f.toString
// res7: String = log.txt

by lolski at April 26, 2015 12:43 AM

Play Framework how to sort collection in repeat() form helper?

Based on the Play (java) documentation, let's say I have the following example:

public class UserForm {
    public String name;
    public List<MyClass> items;
}

and

@helper.inputText(userForm("name"))

@helper.repeat(userForm("items"), min = 1) { itemField =>

    @helper.inputText(itemField)

}

However, in MyClass I have an overridden implementation of compareTo(). I also have a getter getSortedItems() that will return the list in the proper sorted order.

Currently, using the repeat() helper does not get my list of items in the ordering that I want. Is there a way to specify the ordering for the repeat() helper? Or can I give it a List as a parameter? It seems like this would be possible to do in Scala.

Any help would be appreciated, thanks!

by KJ50 at April 26, 2015 12:28 AM

/r/emacs

How to make evil text objects with common text in delimiters?

I'd like to make a LaTeX environment text object for evil. Based on some code I found on StackOverflow, I tried the following:

(defmacro define-and-bind-text-object (key start-regex end-regex)
  (let ((inner-name (make-symbol "inner-name"))
        (outer-name (make-symbol "outer-name")))
    `(progn
       (evil-define-text-object ,inner-name (count &optional beg end type)
         (evil-select-paren ,start-regex ,end-regex beg end type count nil))
       (evil-define-text-object ,outer-name (count &optional beg end type)
         (evil-select-paren ,start-regex ,end-regex beg end type count t))
       (define-key evil-inner-text-objects-map ,key (quote ,inner-name))
       (define-key evil-outer-text-objects-map ,key (quote ,outer-name)))))

(define-and-bind-text-object "e" "\\\\begin{\\(\\w+\\)}" "\\\\end{\\(\\w+\\)}")

Unfortunately, this code cannot cope with nested environments, like the following:

\begin{foo}
  \begin{bar}
    blah blah blah
  \end{bar}
\end{foo}

What I need to do is make the text in the braces consistent between the beginning and ending delimiter. I tried using regex groups, but that didn't help.

submitted by BLAND_AS_OVALTINE
[link] [comment]

April 26, 2015 12:13 AM

CompsciOverflow

An algorithm to compute a set of states that satisfy a specific CTL formula

Working through a past exam question and I'm unsure where to start or what form they want the answer in:

Define an algorithm that receives as input a finite transition system TS defined over the set of actions {a, b, c} and computes the set of states of TS that satisfy the CTL formula ∃a U b.

Would anybody be able to give me a kick-start or a walk-through on how to answer this? Really appreciate any help.

Edit: my attempt based on Klaus' answer:

for all executions in TS
  for all states
    if b holds
      add current state to the stack, EXISTS = TRUE
    else if a holds
      if current state == next state in execution
        do nothing
      else if next state contains a or b
        add current state to the stack, EXISTS = TRUE
      else
        do nothing
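
For comparison, a sketch of the standard backward fixpoint computation (assuming the action-based reading of ∃ a U b: some path takes a-transitions until a b-transition fires; the transition system is encoded as a set of labeled edges):

def existsUntil(states: Set[Int], trans: Set[(Int, String, Int)]): Set[Int] = {
  // base case: every state with an outgoing b-transition satisfies the formula
  var sat = states.filter(s => trans.exists { case (src, act, _) => src == s && act == "b" })
  var changed = true
  while (changed) {
    // add states that can take an a-transition into the current satisfying set
    val next = sat ++ states.filter(s =>
      trans.exists { case (src, act, dst) => src == s && act == "a" && sat(dst) })
    changed = next.size != sat.size
    sat = next
  }
  sat
}

The set grows monotonically, so the loop terminates after at most |S| iterations.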

by eyes enberg at April 26, 2015 12:07 AM

HN Daily

April 25, 2015

/r/emacs

Whitespace mode

I have a setup for whitespace mode that I find very useful. It highlights chars in lines over 80 characters. Here's the config to get that:

(setq whitespace-line-column 80)
(setq whitespace-style '(face lines-tail))
(add-hook 'prog-mode-hook 'whitespace-mode)

What I'd like is to have a function that makes space and tab characters show up, but I can't manage to get it working. As far as I can tell I need to do something like

(defvar whitespace-full-list
  '(face tabs spaces trailing lines space-before-tab newline indentation
    empty space-after-tab space-mark tab-mark newline-mark))

(defun better-whitespace ()
  (interactive)
  (let ((whitespace-small-list '(face lines-tail))
        (whitespace-big-list whitespace-full-list))
    (if (eq whitespace-style whitespace-small-list)
        (setq whitespace-style whitespace-small-list)
      (setq whitespace-style whitespace-small-list))))

;; just to test
(define-key prog-mode-map (kbd "C-c w") 'better-whitespace)

Has anybody else dealt with something like this?

EDIT:

I have a working version. Here's the code it took, if anybody's interested in something similar. Thanks to /u/lawlist for helping!

(defun better-whitespace ()
  (interactive)
  (whitespace-mode -1)
  (let ((ws-small '(face lines-tail))
        (ws-big '(face tabs spaces trailing lines-tail space-before-tab
                       newline indentation empty space-after-tab
                       space-mark tab-mark newline-mark)))
    (if (eq whitespace-style ws-small)
        (setq whitespace-style ws-big)
      (setq whitespace-style ws-small)))
  (whitespace-mode 1))

(define-key prog-mode-map (kbd "C-c w") 'better-whitespace)
submitted by nautola
[link] [5 comments]

April 25, 2015 11:36 PM

TheoryOverflow

Equivalent NP-Complete problem for this linear program

I have a linear-program that solves for the defender's best response against a follower's mixed-strategy. In lpsolve, the problem is formulated as follows:

max: 1.62 z_i0 + 0.85 z_i1 + 0.25 z_i2 + 0.28 z_i3 - 3;
b + e + f + g + j + n + q + r + x <= 3;
z_i0 - r - f - n - e - j <= 0;
z_i1 - f - q - b <= 0;
z_i2 - x <= 0;
z_i3 - x - g <= 0;

z_i0 >= 0;
z_i0 <= 1;

z_i1 >= 0;
z_i1 <= 1;

z_i2 >= 0;
z_i2 <= 1;

z_i3 >= 0;
z_i3 <= 1;
bin b;
bin e;
bin f;
bin g;
bin j;
bin n;
bin q;
bin r;
bin x;

$z_{i0}$, $z_{i1}$, $z_{i2}$, $z_{i3}$ are variables that can either be 0 or 1. If for the $j^{th}$ follower strategy $z_{ij}$ is 1, it means that the defender has at least one resource assigned to a target that is attacked by the attacker in his $j^{th}$ strategy. $b$, $e$, $f$, ..., $x$ are binary variables that simply say whether the defender has allocated a resource to this target. If the value is 1, it means there is a resource allocated to this target.

The coefficient of each $z_{ij}$ variable is just the product of the probability that the attacker plays the $j^{th}$ strategy and the reward the attacker gets for successfully playing that strategy. The defender gets a negative payoff if the attacker is successful, so the $3$ at the end is just the maximum payoff that the attacker can get, which we subtract from the value of the objective function. The best the defender can do is get a payoff of 0 (we can see that if every $z_{ij}$ is 1, the defender gets the best payoff of 0).

The idea is to figure out which targets the defender can optimally cover with up to $k$ resources. I'm trying to figure out which NP-complete problem I can reduce this to.

by Vivin Paliath at April 25, 2015 11:21 PM

QuantOverflow

Time-independent local volatility

Suppose somebody provides us with a surface of European call prices $C(\tau,K)$ where $\tau$ stands for time-to-maturity and $K$ for the strike. By Dupire's results, there is a unique local volatility function $\sigma(\tau,K)$ that generates these prices, and its square can be expressed from them as $$ \sigma^2(\tau,K) = \frac{2C_\tau}{K^2C_{KK}}, $$ here for simplicity I am assuming that the interest rate is zero. Now, if we just have $C(T,K)$ for a single maturity $\tau = T$, is it true that there exists a unique time-independent local volatility $\sigma(K)$ that generates this price at that maturity? In case it does, is there an analytic formula for that function?

by Ulysses at April 25, 2015 11:09 PM

CompsciOverflow

$\mathsf{PP}$ compared to $\mathsf{\#P}$

Since we know that $\mathsf{TC^0\subsetneq PP}$, I wonder if we also know that $\mathsf{TC^0\subsetneq\#P}$? I understand that $\mathsf{\#P}$ is in the counting hierarchy.

by Turbo at April 25, 2015 11:07 PM

StackOverflow

"object index is not a member of package views.html" when opening scala play project in scala ide

I've created a play project with play 2.3.7

In the project directory, I ran activator and ran the eclipse command to generate eclipse project files.

When I go to Eclipse (I'm using the Scala IDE from Typesafe, build id: 4.0.0-vfinal-20150119-1023-Typesafe), there is an error in my Application.scala file:

object index is not a member of package views.html

Is there something amiss with my setup? The app runs fine when I execute run at the activator console prompt.

Thanks!

EDIT: Added code

package controllers

import play.api._
import play.api.mvc._

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }

} 

The error is on the 'Ok..' line.

There is a file in views called index.scala.html, and the app runs fine when I run it from activator.
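
A configuration sketch (assumption: the Eclipse project simply does not include the managed-sources directory where Twirl generates views.html.index; sbteclipse can be told to add managed sources):

// build.sbt, with the sbteclipse plugin already on the classpath
EclipseKeys.createSrc := EclipseCreateSrc.Default + EclipseCreateSrc.Managed

// then regenerate the project files after a compile:
//   activator compile eclipse

and refresh the project in Eclipse afterwards.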

by user384842 at April 25, 2015 10:57 PM

Applicative Functor instance for functions with the same result type in Scala

I'm trying to implement the functor and applicative instances for functions with the same initial type in Scala. My current implementation of the two type classes is:

trait Functor[F[_]] {
  def fmap[A,B](f: A => B): F[A] => F[B]
}

trait Applicative[F[_]] extends Functor[F] {
  def pure[A](a: A): F[A]
  def apply[A,B](f: F[A => B]): F[A] => F[B]
}

The problem that I have is that Function1 has two type parameters and I think I cannot fix only one of them. Is it possible to do this in Scala?
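
Yes, via a type lambda that fixes the input type R (a sketch in Scala 2.x syntax, matching the traits above):

def functionApplicative[R]: Applicative[({ type F[B] = R => B })#F] =
  new Applicative[({ type F[B] = R => B })#F] {
    def pure[A](a: A): R => A = _ => a
    // apply the function produced by f at r to the value produced by fa at r
    def apply[A, B](f: R => (A => B)): (R => A) => (R => B) =
      fa => r => f(r)(fa(r))
    def fmap[A, B](f: A => B): (R => A) => (R => B) =
      fa => r => f(fa(r))
  }

This is the usual reader-style instance; the kind-projector plugin would let one write Applicative[Function1[R, ?]] instead of the raw type lambda.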

by mariop at April 25, 2015 10:56 PM

How to set linux environment variables with ansible

Hi, I am trying to find out how to set an environment variable with Ansible.

I want something with the effect of this simple shell command:

export LC_ALL=C

I tried it as a shell command and got an error; I tried using the environment module and nothing happened.

What am I missing?

by Gleeb at April 25, 2015 10:32 PM

QuantOverflow

A problem involving random walks from Shreve

Problem 5.4i in Shreve examines a symmetric random walk. Let $\tau_2 $ be the first time that the random walk reaches 2.

For $\alpha\in (0, 1) $, we are given that $$E [\alpha ^ {\tau_2}] =\sum_{k = 1} ^\infty (\alpha/2) ^ {2k}\frac{(2k)!}{(k+1)!k!}$$

It's clear that

$$E [\alpha ^ {\tau_2}] =\sum_{k = 1} ^\infty (\alpha) ^ {2k} P (\tau_2 = 2k) $$

It's therefore tempting to conclude that

$$P (\tau_2 = 2k)=\frac{(2k)!}{(k+1)!k!}2^{-2k}$$

And indeed that is the answer given. But in general $\sum_i f_i g_i =\sum_i f_i h_i$ does not imply that $g_i = h_i$ and so I'm not sure how we can reach this conclusion. (Asked about specific circumstances where this conclusion is true here.) What am I missing?
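
One resolution sketch: the equality holds for every $\alpha \in (0,1)$, not just a single value, so both sides are the same power series on an interval, and the identity theorem for power series gives uniqueness of coefficients:

$$\sum_{k=1}^\infty g_k \alpha^{2k} = \sum_{k=1}^\infty h_k \alpha^{2k} \;\text{ for all } \alpha\in(0,1) \quad\Longrightarrow\quad g_k = h_k \;\text{ for all } k.$$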

by Xodarap at April 25, 2015 10:23 PM

/r/systems

TheoryOverflow

Test DataSet Generator for Monotone NAE 3SAT Algorithm

Can someone point me to a configurable (variable/clause count etc.) generator of difficult problem sets for testing a 3SAT algorithm, i.e., sets that are challenging for current algorithms, specifically NAE 3SAT data sets if possible?

by TheoryQuest1 at April 25, 2015 10:12 PM

StackOverflow

Closing over java.util.concurrent.ConcurrentHashMap inside a Future of Actor's receive method?

I've an actor where I want to store my mutable state inside a map.

Clients can send Get(key:String) and Put(key:String,value:String) messages to this actor.

I'm considering the following options.

  1. Don't use futures inside the Actor's receive method. This may have a negative impact on both latency and throughput in case I have a large number of gets/puts, because all operations will be performed in order.
  2. Use java.util.concurrent.ConcurrentHashMap and then invoke the gets and puts inside a Future.

Given that java.util.concurrent.ConcurrentHashMap is thread-safe and provides a finer level of granularity, I was wondering if it is still a problem to close over the concurrentHashMap inside a Future created for each put and get.

I'm aware of the fact that it's a really bad idea to close over mutable state inside a Future inside an Actor but I'm still interested to know if in this particular case it is correct or not?

by Soumya Simanta at April 25, 2015 10:10 PM

How to eliminate vars in a Scala class?

I have written the following Scala class based on a corresponding Java class. The result is not good: it still looks Java-like, is replete with vars, is very long, and is not idiomatic Scala in my opinion.

I am looking to shrink this piece of code, eliminate the vars and the @BeanProperty stuff.

Here is my code:

    import scala.collection.immutable.Map 

     class ReplyEmail {

     private val to: List[String] = List()   
     private val toname: List[String] = List()
     private var cc: ArrayList[String] = new ArrayList[String]()

    @BeanProperty
    var from: String = _

    private var fromname: String = _

    private var replyto: String = _

    @BeanProperty
    var subject: String = _

    @BeanProperty
    var text: String = _

    private var contents: Map[String, String] = new scala.collection.immutable.HashMap[String, String]()

    @BeanProperty
    var headers: Map[String, String] = new scala.collection.immutable.HashMap[String, String]()

    def addTo(to: String): ReplyEmail = {
      this.to.add(to)
      this
    }

    def addTo(tos: Array[String]): ReplyEmail = {
      this.to.addAll(Arrays.asList(tos:_*))
      this
    }

    def addTo(to: String, name: String): ReplyEmail = {
      this.addTo(to)
      this.addToName(name)
    }

    def setTo(tos: Array[String]): ReplyEmail = {
      this.to = new ArrayList[String](Arrays.asList(tos:_*))
      this
    }

    def getTos(): Array[String] = {
      this.to.toArray(Array.ofDim[String](this.to.size))
    }

    def getContentIds(): Map[_,_] = this.contents

    def addHeader(key: String, `val`: String): ReplyEmail = {
      this.headers + (key -> `val`)
      this
    }

     def getSMTPAPI(): MyExperimentalApi = new MyExperimentalApi
      }

   }


I appreciate any help in accomplishing this goal.

Updated Code

I made some small changes to the code, like introducing an Option[String] instead of a String

case class ReplyEmail(
  to: List[String] = Nil,
  toNames: List[String] = Nil,
  cc: List[String],
  from: String,
  fromName: String,
  replyTo: String,
  subject: String,
  text: String,
  contents: Map[String, String] = Map.empty,
  headers: Map[String, String] = Map.empty) {
  def withTo(to: String): ReplyEmail = copy(to = this.to :+ to)
  def withTo(tos: List[String]): ReplyEmail = copy(to = this.to ++ tos)
  def withTo(to: Option[String], name: Option[String]) = copy(to = this.to :+ to, toNames = toNames :+ name)

  def setTo(tos: List[String]): ReplyEmail = copy(to = tos)
  def withHeader(key: String, value: String) = copy(headers = headers + (key  -> value))
  def smtpAPI = new MyExperimentalApi

}


Now, the only problem I face is in the following line. The error is: type mismatch; found: List[java.io.Serializable], required: List[String].

def withTo(to: Option[String], name: Option[String]) = copy(to = this.to :+ to, toNames = toNames :+ name)
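
A fix sketch: :+ appends one element, so appending an Option[String] to a List[String] forces the least common supertype, java.io.Serializable; ++ instead treats the Option as a zero- or one-element collection and preserves the element type:

def withTo(to: Option[String], name: Option[String]): ReplyEmail =
  copy(to = this.to ++ to, toNames = toNames ++ name)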

by user3825558 at April 25, 2015 10:05 PM

How to define a type that corresponds to Map[String, Option[String]]?

Given the following method that takes a Map[String, Option[String]] parameter:

def myMethod(m: Map[String, Option[String]]) = {
  ...
}

how to define a new type MyMap that implements Map[String, Option[String]] so that the method looks like this:

def myMethod(m: MyMap) = {
  ...
}
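
A minimal sketch: a plain type alias is enough, since an alias is fully interchangeable with the aliased type:

object MyTypes {
  type MyMap = Map[String, Option[String]]
}

import MyTypes.MyMap

def myMethod(m: MyMap) = {
  // m is exactly a Map[String, Option[String]] here
}

(If behavior beyond an alias is needed, e.g. extra methods, one would instead wrap the map or use an implicit class.)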

by j3d at April 25, 2015 10:02 PM

Why does immutable.ParMap.mapValues return ParMap instead of immutable.ParMap?

In my Scala 2.11.6 application I have defined an immutable.ParMap like so:

  object Foos {
    val foos: immutable.ParMap[String, Foo] = immutable.ParMap(
     ...
    )
  }

Later on, I'd like to create a new immutable.ParMap using the same keys, so I use mapValues:

 val fooServices: immutable.ParMap[String, FooService] = Foos.foos mapValues (_.fooService)

Scala complains

Expression of type ParMap[String, FooService] does not conform to expected type ParMap[String, FooService].

As a workaround I can use a for comprehension, which the compiler permits:

val fooServices: immutable.ParMap[String, FooService] =
    for ((name, ex) <- Foos.foos) yield name -> ex.fooService

But it would nice to be able to use the library functions that are designed for this exact task. Why can't mapValues infer the most specific type here?

by user1885198 at April 25, 2015 10:01 PM

How does the Cats library in Scala relate to scalaz?

How does the Cats library relate to scalaz? The Cats project mentions it is descended from scalaz.

by user2726995 at April 25, 2015 09:58 PM

CompsciOverflow

Adleman's theorem to $\mathsf{P=BPP}$

Adleman's theorem gives $$\mathsf{BPP\subseteq P/Poly}.$$

Why is this theorem considered a progenitor of the derandomization conjecture that $\mathsf{P=BPP}$?

Does it mean Adleman's result could be considered as evidence that $$\mathsf{BPP\subseteq P/Log}$$ is a realistic possibility?

Analogously, does it mean $$\mathsf{NP\subseteq P/Poly}$$ could be considered as evidence that $$\mathsf{NP\subseteq P/Log}$$ is a realistic possibility?

Is there a satisfactory answer without considering Impagliazzo-Wigderson's conditional $\mathsf{P=BPP}$ result?

by Turbo at April 25, 2015 09:51 PM

Planet Theory

EATCS honours three outstanding PhD theses with the first EATCS Distinguished Dissertation Awards

The EATCS is proud to announce that, after examining the nominations received from our research community,  the EATCS Distinguished Dissertation Award Committee 2015, consisting of Javier Esparza, Fedor Fomin,  Luke Ong and Giuseppe Persiano (chair), has selected the following three theses for the EATCS Distinguished Dissertation Award for 2015:

Each of the awards carries a monetary prize of 1000 Euros, provided by the EATCS. Each of the award-receiving dissertations will be published on line by the EATCS at http://www.eatcs.org/index.php/dissertation-award.

Karl Bringmann's thesis consists of two parts: one dealing with "Sampling from Discrete Distributions" and one dealing with "Computing Fréchet Distances." Sampling from a discrete probability distribution is a fundamental and classically studied problem. Bringmann's thesis contributes a deeper understanding of the amount of memory needed for sampling from a discrete probability distribution. The provided bound is tight for systematic data structures; for non-systematic data structures, the thesis shows that, quite surprisingly, with only 1 redundant bit it is possible to reply to queries in expected constant time. In the second part of the thesis, Bringmann relates the computational complexity of computing the Fréchet distance of two curves (a classical notion from Computational Geometry) to a variant, SETH', of the Strong Exponential Time Hypothesis. Specifically, if SETH' holds, then the Fréchet distance of two curves cannot be computed in strongly subquadratic time.

Skrzypczak’s thesis is about the use of descriptive set theory as a framework for investigating the omega-regular tree languages or equivalently the languages defined by formulas of Monadic Second Order logic with several successors. The thesis makes progress on long-standing open problems in the theory of automata: the characterizations of regular languages of infinite trees that are definable in weak monadic second-order logic and the Rabin-Mostowski index problem. For both problems, Skrzypczak's thesis provides solutions for notable special cases.

Wootters' thesis approaches coding-theoretic problems from an analytic point of view, rather than an algebraic point of view; it develops new tools for studying codes, makes several contributions, and settles a few important open problems. Specifically, Wootters' thesis advances the understanding of two important topics in Coding Theory: List Decoding and Local Decoding. Regarding List Decoding, the thesis shows that random linear codes, over constant-sized alphabets, are optimally list-decodable (this answers a question asked by Elias over twenty years ago) and that there exist Reed-Solomon codes which are list-decodable beyond the Johnson bound (this answers a question asked by Guruswami and Sudan over 15 years ago). Regarding Local Decoding, the thesis gives a family of high-rate codes with local correctability that admits a sublinear-time decoding algorithm.

by Luca Aceto (noreply@blogger.com) at April 25, 2015 09:49 PM

StackOverflow

Executing a function with a timeout

What would be an idiomatic way of executing a function within a time limit? Something like,

(with-timeout 5000
 (do-something))

Unless do-something returns within 5000 ms, throw an exception or return nil.

EDIT: before someone points it out there is,

clojure (with-timeout ... macro)

but with that, the future keeps executing, which does not work in my case.

by Hamza Yerlikaya at April 25, 2015 09:42 PM

Convert map of vectors to vectors of columns in Clojure

Clojure newbie. Apologies if this has been answered, but searches have not turned up exactly what I need, and I am finding Clojure pretty overwhelming. :(

I have a collection (or list or sequence or vector) of maps like so:

{ :name "Bob", :data [32 11 180] }
{ :name "Joe", :data [ 4  8  30] }
{ :name "Sue", :data [10  9  40] }

I want to create new vectors containing the data in the vector "columns" associated with keys that describe the data, like so:

{ :ages [32 4 10], :shoe-sizes [11 8 9], :weights [180 30 40] }

Actually, a simple list of vectors might be adequate, i.e.:

[32 4 10] [11 8 9] [180 30 40]  

If it's better/easier to make the original list into a vector, that's fine; whatever's simplest.

by Mr. Fussyfont at April 25, 2015 09:12 PM

CompsciOverflow

The set of all vertices, such that each vertex in the set has a path to exactly $k$ vertices

I need to find algorithms for both undirected and directed graphs, with no assumption on them being connected. Also the algorithms must be $O(V+E)$, where the undirected one should not depend on $k$.

I have ideas for both; I'm just not very sure about the directed version, because I'm not sure how to approach the correctness proof.

For the directed version, I thought about finding all the SCCs with two DFS runs, and then running a topological sort on the graph created by representatives from each SCC. Afterwards, I'll flip the edges and start a DFS from the end vertex. For every SCC node I visit, I'll increment a counter by the number of nodes in the SCC I visited. If, while visiting an SCC node, the counter shows $k+1$ nodes, all the nodes in the SCC will be added to the set. At this point the algorithm will go back as though we had reached a leaf.

For the undirected version, I can just run a BFS/DFS from each unvisited vertex to find all the connected components. If a connected component has exactly k+1 nodes, then all the nodes in that component belong to the set.

Any help regarding the directed version will be great, thanks!

by Xsy at April 25, 2015 09:06 PM

type theory notation troubles

I'm working through "Types and Programming Languages" by Benjamin Pierce and I don't quite understand the notation. Particularly on Page 106, (chapter 9 Simply Typed Lambda-Calculus) there is a lemma (9.3.7):

$$ \text{If } \Gamma \vdash t:T \text{ and } x \notin dom(\Gamma) \text{, then } \Gamma, x:S \vdash t:T. $$

I understand the idea roughly but I can't quite read this statement out in english. How do I translate the turnstile symbol (\vdash in tex) in this context? In the book I've seen $$ \vdash t:T $$ translated as t is a closed, well-typed term but I don't quite see how to apply that in this context. Looking around apparently the turnstile symbol is also used to mean provable.

Gamma is the typing context and provides bindings of variables to their types. t:T means the term t has the type T. The function dom(Gamma) is the set of variables bound by Gamma.

So my best guess for translating this is as follows:

If the term t having type T is a closed, well-typed term under the typing context gamma and x is not in the set of bound variables of gamma, then the term t having type T is still a closed, well-typed term in the typing context gamma after binding the term x having type S.

Is this roughly right? It makes sense but I still feel a bit shaky on the translation. Anyone have any other suggestions, corrections, or comments? Thanks

by Hath995 at April 25, 2015 09:04 PM

Are LALR tables equal to SLR tables if the grammar is SLR modulo precedence/associativity of operators?

Consider a grammar G which is SLR(1) except for some shift/reduce conflicts which can be resolved by imposing some precedence or associativity of operators.

Is it the case that the construction of the SLR(1) parser and the LALR(1) always yield the same table?

If not, what happens if the grammar G is SLR(1)?


I'm particularly interested in this question about the following grammar:

\begin{align*} C &\to E \, \text{rel} \, E \mid C \& C \mid ! \, C \mid \text{pred}\, ( \,Es\, ) \\ E &\to \text{num} \mid \text{id} \mid E\, +\, E \mid E\, *\, E \mid - \,E \mid ( \,E \,) \\ Es &\to E \mid E\, ,\, Es \end{align*}

Using a custom SLR and LALR table generator I obtain exactly the same table for the above grammar, with some shift/reduce conflicts that can be resolved by giving precedences in the order &, !, rel, +, *, - and left associativity to &, + and *. I'm in doubt whether my implementation is incorrect or the tables are really the same, and if this is the case whether there is some general rule that applies in certain circumstances.

by Bakuriu at April 25, 2015 08:51 PM

Planet Clojure

Pantomime 2.6.0 is Released

TL;DR

Pantomime is a Clojure interface to Apache Tika. 2.6.0 is a minor release that upgrades Tika.

Apache Tika 1.8

Apache Tika dependency has been upgraded to version 1.8.

Change Log

Pantomime change log is available on GitHub.

Pantomime is a ClojureWerkz Project

Pantomime is part of the group of libraries known as ClojureWerkz, together with

  • Langohr, a Clojure client for RabbitMQ that embraces the AMQP 0.9.1 model
  • Elastisch, a minimalistic Clojure client for ElasticSearch
  • Cassaforte, a Clojure Cassandra client built around CQL
  • Monger, a Clojure MongoDB client for a more civilized age
  • Welle, a Riak client with batteries included
  • Neocons, a client for the Neo4J REST API
  • Quartzite, a powerful scheduling library

and several others. If you like Pantomime, you may also like our other projects.

Let us know what you think on Twitter or on the Clojure mailing list.

Michael on behalf of the ClojureWerkz Team

by The ClojureWerkz Team at April 25, 2015 08:45 PM

/r/netsec

Planet Clojure

Quieter clojure.test Output

If you use clojure.test then there is a good chance you’ve been annoyed by all the output when you run your tests in the terminal. When there is a test failure you have to scroll through pages of output to find the error.

With release 0.9.0 of lein-test-refresh you can minimize the output of clojure.test and only see failure and summary messages. To enable this feature add :quiet true to the :test-refresh configuration map in either your project.clj or profiles.clj file. If you configure lein-test-refresh in ~/.lein/profiles.clj then turning on this feature looks like the following. [1]

{:user {:plugins [[com.jakemccrary/lein-test-refresh "0.9.0"]]
        :test-refresh {:quiet true}}}

Setting up your profiles.clj like above allows you to move to a Clojure project in your terminal, run lein test-refresh, and have your clojure.test tests run whenever a file changes. In addition, your terminal won't show the usual Testing a.namespace output.

Below is what you typically see when running clojure.test tests in a terminal. I had to cut most of the Testing a.namespace messages from the picture.

Normal view of test output

The following picture is with quiet mode turned on in lein-test-refresh. No more Testing a.namespace messages! No more scrolling through all your namespaces to find the failure!

Minimal output in console

I just released this feature, so I haven't had a chance to use it too much. I imagine it may evolve to change the output more.


  [1] More configuration options can be found here

by Jake McCrary at April 25, 2015 08:40 PM

QuantOverflow

ex ante tracking error correlation between funds

I have two portfolios called Comb & Global. They both have the same investable universe, let's say 3000 stocks, and are measured against the same benchmark. So it is possible that both funds hold the same stocks. I would like to examine the correlation between the ex-ante tracking errors of the two funds.

I know I can calculate the ex-ante tracking error as below,

te = sqrt((port_wgt - bm_wgt)' * cov_matrix * (port_wgt - bm_wgt))

I also know the correlation is calculated by

 p = cov(x,y) / (stdev(x) * stdev(y))

I was wondering the best way to calculate the ex ante correlation between the two funds? Is there a relationship between the two funds weights that I can make use of?

Update

I should have mentioned that the two portfolios are sub-portfolios that are combined into one portfolio. So I wanted to see the correlation between the ex-ante tracking errors of the two sub-portfolios.

I realised I can do the following,

port_wgts - number_of_companies x 2 matrix
cov_matrix - number_of_companies x number_of_companies matrix

so the below line will return a 2x2 covariance matrix.

port_wgts' * cov_matrix * port_wgts

So I have the variances of both sub-portfolios; taking the square roots of these gives me the tracking errors of both.

Convert the 2 X 2 covariance matrix to a correlation matrix by the following

  D = Diag(cov_matrix)^(1/2)
  corr_matrix = D^-1 * cov_matrix * D^-1

So I now have the correlation between the two sub-portfolios, using just the weights.

by mHelpMe at April 25, 2015 08:09 PM

/r/netsec

DataTau

Lobsters

Overcoming Bias

Financial Status

At a finance conference last year, I learned this: Instead of saving money directly for their own retirement, many workers have their employers save for them. Those employers hire in-house specialists to pick which specialty consulting firms to hire. These consulting firms advise employers on which investment firms to use. And those investment firms pick actual productive enterprises in which to invest. All three of these intermediaries, i.e., employer, consultant, and investor, take a cut for their active management.

Even employees who invest for themselves tend to pick at least one high fee intermediary: an active-management investment firm. Few take the low cost option of just directly investing in a low-overhead index fund, as recommended by academics for a half-century.

I’ve given talks at many active-management investment firms over the years. They pay speakers very well. I’ve noticed that (like management consults) they tend to hire very visibly impressive people. They also give big investors a lot of personal quality time, to create personal relationships. Their top people seem better at making investors like them than at picking investments. One math-focused firm said it didn’t want more investors because investors all demand more face time and influence over investment choices.

Since 1880 the fraction of US GDP paid for financial intermediation has gone from 2% to 8%. And:

The unit cost [relative to asset income] of financial intermediation appears to be as high today as it was around 1900. This is puzzling. Advances in information technology (IT) should lower the physical transaction costs of buying, pooling and holding financial assets. Trading costs have indeed decreased, but trading volumes have increased even more, and active fund management is expensive. … Investors spend 0.67% of asset value trying (in vain on average, by definition) to beat the market. … While mutual funds fees have dropped, high fee alternative asset managers have gained market share. The end result is that asset management unit costs have remained roughly constant. The comparison with retail and wholesale trade is instructive. In these sectors … larger IT investment coincides with lower prices and lower (nominal) GDP shares. In finance, however, exactly the opposite happens. … A potential explanation is oligopolistic competition but … the historical evidence does not seem to support the naive market power explanation, however. (more)

Our standard academic story on finance is that it buys risk-reduction, and perhaps also that we are overconfident in finance judgements. But it isn’t clear we’ve had much net risk reduction, especially to explain a four times spending increase. (In fact, some argue plausibly that those who take more risk don’t actually get higher returns.) On overconfidence, why would it induce such indirection, and why would its effects increase by such a huge factor over time?

Finance seems to me to be another area, like medicine, schools, and many others, where our usual standard stories just don’t work very well at explaining the details. In such cases most economists just gullibly plow ahead trying to force-fit the standard story onto available data, instead of considering substantially different hypotheses. Me, I try to collect as many pieces of related puzzling data as I can, and then ask what simple but different stories might account at once for many of those puzzles.

To me an obvious explanation to consider here is that we like to buy special connections to prestigious advisors. We look good when bonded to others who look good, and we treat investor relations as especially important bonds. We seem to get blamed less for failures via prestigious associates, and yet are credited for most of our success via them. Finally, we just seem to directly like prestigious associations, even when others don’t know of them. And we may also gain from associating with others who share our advisors.

To explain the change in finance over time, I’ll try my usual go-to explanation for long-term changes in the last few centuries: increasing wealth. In particular, social bonds as a luxury that we buy more of when richer. This can explain the big increases we’ve seen in leisure, product variety, medicine, and schooling.

So as we get rich, we spend larger fractions of our time socializing, we pay more for products with identities that can tie us to particular others, we spend more to assure associates that we care about their health, and we spend more to visibly connect with prestigious associates. Some of those prestigious associates are at the schools we attend, the places we live, and via the products we buy. Others come via our financial intermediaries.

This hypothesis suggests an ironic reversal: While we usually play up how much we care about associates, and play down our monetary motives, in finance we pretend to make finance choices purely to get money, while in fact we lose money to gain prestigious associates.

by Robin Hanson at April 25, 2015 07:45 PM

StackOverflow

Is it possible to showcase the different strategies of evaluation by modifying this simple reducer?

I am the kind of person who prefers learning by looking at code instead of reading long explanations. This might be one of the reasons I dislike long academic papers. Code is unambiguous, compact, noise-free, and if you don't get something you can just play with it - no need to ask the author.

This is a complete definition of the Lambda Calculus:

-- A Lambda Calculus term is a function, an application or a variable.
data Term = Lam Term | App Term Term | Var Int deriving (Show,Eq,Ord)

-- Reduces lambda term to its normal form.
reduce :: Term -> Term
reduce (Var index)      = Var index
reduce (Lam body)       = Lam (reduce body)
reduce (App left right) = case reduce left of
    Lam body  -> reduce (substitute (reduce right) body)
    otherwise -> App (reduce left) (reduce right)

-- Replaces bound variables of `target` by `term` and adjusts bruijn indices.
-- Don't mind those variables, they just keep track of the bruijn indices.
substitute :: Term -> Term -> Term
substitute term target = go term True 0 (-1) target where
    go t s d w (App a b)             = App (go t s d w a) (go t s d w b)
    go t s d w (Lam a)               = Lam (go t s (d+1) w a) 
    go t s d w (Var a) | s && a == d = go (Var 0) False (-1) d t 
    go t s d w (Var a) | otherwise   = Var (a + (if a > d then w else 0))

-- If the evaluator is correct, this test should print the church number #4.
main = do
    let two = (Lam (Lam (App (Var 1) (App (Var 1) (Var 0)))))
    print $ reduce (App two two)

In my opinion, the "reduce" function above says much more about the Lambda Calculus than pages of explanations, and I wish I could just have looked at it when I started learning. You can also see it implements a very strict evaluation strategy that goes even inside abstractions. In that spirit, how could that code be modified in order to illustrate the many different evaluation strategies that the LC can have (call-by-name, lazy evaluation, call-by-value, call-by-sharing, partial evaluation and so on)?

by Viclib at April 25, 2015 07:43 PM

TheoryOverflow

Adleman's theorem to $\mathsf{P=BPP}$

Adleman's theorem gives $$\mathsf{BPP\subseteq P/Poly}.$$

Why is this theorem considered a progenitor of the derandomization conjecture that $\mathsf{P=BPP}$?

Why does Adleman's result imply that $$\mathsf{BPP\subseteq P/Log}$$ is a realistic possibility?

Does it mean that $$\mathsf{NP\subseteq P/Poly\implies NP\subseteq P/Log}$$ is a realistic possibility?

by Turbo at April 25, 2015 07:43 PM

What would signify hierarchy collapse to first level?

We know that $$\mathsf{NP\subseteq P/Poly \iff coNP\subseteq P/Poly\implies PH=\Sigma_2^P=\Pi_2^P}$$ $$\mathsf{NP\subseteq P/Log\iff coNP\subseteq P/Log\implies PH=\Sigma_0^P=\Pi_0^P}$$

Is there a circuit result that would imply $$\mathsf{PH=\Sigma_1^P=\Pi_1^P}$$ but still $$\mathsf{PH\neq\Sigma_0^P=\Pi_0^P}$$ is a possibility?

Is there a possible similar collapse result from non-circuit containment?

by Turbo at April 25, 2015 07:36 PM

Lobsters

CompsciOverflow

/r/emacs

(delete-selection-mode) doesn't work...

Hi!

I just installed Emacs for Mac OS, and installed the configuration files as per this tutorial: http://www.braveclojure.com/basic-emacs/#1__Installation . But for some reason I can't delete regions, even though the tutorial says it should work. I'm very new to emacs so nothing makes much sense: even when I manually activate (delete-selection-mode) I can't delete regions (ctrl-space, select region, backspace). Am I doing something wrong?

submitted by JacquesDegree
[link] [6 comments]

April 25, 2015 07:21 PM

TheoryOverflow

How can a top-down parser detect ungrammaticality of an input string? [on hold]

What would a sketch of recovering from parsing errors look like?

Would backtracking be used for this?

Thanks

by bob tutpo at April 25, 2015 06:56 PM

Why do top-down parsers have difficulty with ambiguous grammars [on hold]

I know there is a fairly recent method that makes it possible for top-down parsers to parse an ambiguous grammar.

But why is it so difficult for a top-down parser to parse an ambiguous grammar?

I'm very new to the field of compilers, so a simple and concise explanation would work well for me.

Thanks

by bob tutpo at April 25, 2015 06:47 PM

StackOverflow

Ocaml List: Implement append and map functions

I'm currently trying to extend a friend's OCaml program. It's a huge collection of functions needed for some data analysis. Since I'm not really an OCaml crack, I'm currently stuck on a (for me) strange list implementation:

type 'a cell = Nil
    | Cons of ('a * 'a llist)
and 'a llist = (unit -> 'a cell);;

I've figured out that this implements some sort of "lazy" list, but I have absolutely no idea how it really works. I need to implement an append and a map function based on the above type. Has anybody got an idea how to do that?

Any help would really be appreciated!

by Chris at April 25, 2015 06:07 PM

Planet Theory

Girls Who Code (Newton) Visit to Harvard

My friend David Miller is looking for instructors to help out with the Newton Girls Who Code club. Here's an announcement; please connect with him if you're interested.

They visited Harvard last week -- David gave me permission to post his description of the visit.  It seemed like a well-organized event -- thanks to the Harvard Women in Computer Science group for putting it together.

----

Last Friday, the Newton Girls Who Code club was welcomed by the Harvard Women in Computer Science at Harvard's Maxwell-Dworkin Laboratory. The students learned about the joint Harvard-MIT Robot Soccer team from the mechanics tech lead, Harvard junior Kate Donahue. She showed us last year's fleet of robots (named after Greek and Roman gods), and described their work on preparing this year's fleet (to be named after Harry Potter characters). Kate emphasized the interplay between computer vision, artificial intelligence, mechanical engineering, and distributed systems. Many of the robot parts are 3D printed -- a technology that the Newton GWC students will become more familiar with this fall as we integrate the Newton Library's 3D printer into the GWC activities.
 
After the robots demonstration, the students took part in a Q+A discussion with Harvard WiCS undergrads Hana Kim, Ann Hwang, and Adesola Sanusi. Our students asked great questions about our hosts' motivation and history with coding, the mechanics of being a college CS major, the role of gender in the CS community, the connections between computer science and other fields, and our hosts' vision for the future of computing. The WiCS undergrads are excellent role models and were enormously encouraging. They pronounced our students more than ready to take Harvard's most popular course, Introduction to Computer Science, and recommended they try out the free, online, EdX version today. It was an exhilarating afternoon!

by Michael Mitzenmacher (noreply@blogger.com) at April 25, 2015 06:07 PM

/r/compsci

Having difficulties with some computer architecture questions.

I have this problem:

A benchmark program makes 12000 memory references. The program accesses cache and main memory. The average memory access time is 28 ns, while the cache and main-memory access times are 12 ns and 48 ns respectively. How many of the benchmark references are in main memory?

It seems like simultaneous equations could be used, i.e. x+y=12000 and (12x+48y)/12000=28, where x is the number of times cache is accessed and y is the number of times main memory is accessed. But this method doesn't give whole numbers for x and y. Can anybody help me with this? :)
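
For what it's worth, a worked sketch under a hierarchical access model, which is a guess at what the exercise intends since it yields whole numbers: every reference pays the 12 ns cache time, and misses additionally pay the 48 ns main-memory time. With miss rate $m$,

$28 = 12 + 48m \Rightarrow m = 16/48 = 1/3,$

so $y = 12000 \cdot 1/3 = 4000$ of the references go to main memory.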

Furthermore, what is a typical size for a cache block today? I've found conflicting answers online.

submitted by sippd
[link] [1 comment]

April 25, 2015 06:02 PM

Lobsters

What Comes After MVC

Hey folks, here's the talk I gave two days ago on improving OO design in Rails apps, mostly applying ideas taken from functional programming in a way Ruby/Rails devs don't see as weird and unidiomatic. Rolling this around in my head is why I keep posting so many stories about FP, dynamic languages, OO design, and the intersection of same.

A silly trailer for the talk and signup for ongoing info + slides with speaker notes is here: https://push.cx/2015/railsconf

Comments

by pushcx at April 25, 2015 05:57 PM

CompsciOverflow

Finding a loop invariant?

What are some efficient ways to find a loop invariant for a given piece of code or a Hoare triple? For example, I managed to find some invariants for simpler examples, but I'm struggling with one that I found. It's a Hoare triple and goes like:

{ 0<N ∧ (∃k : 0≤k<N : a[k]=0) } 

i := 0 ; x := false ; 
while i < N do {x := x ∨ (a[i]=0) ; i++} 

{ x } 

My guess is that the invariant is i < N, but I feel that I'm missing something. Maybe I could express this as i ≥ 0, because i starts at 0 and only increases afterwards.
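
For comparison, a candidate invariant that also records what x has computed so far (a sketch, not a checked proof):

0 ≤ i ≤ N ∧ (x ≡ (∃k : 0≤k<i : a[k]=0))

It holds initially (i = 0, x = false, and the existential over an empty range is false), it is preserved by the loop body, and at exit (i = N) it yields x via the precondition (∃k : 0≤k<N : a[k]=0).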

by Boris Jakovljević at April 25, 2015 05:47 PM

TheoryOverflow

What is the relationship between $\mathsf{APX}$ and $\mathsf{MaxSNP}$ classes?

My understanding of these classes is really fuzzy. The more I try to read, the more confused I get. Can anyone help me understand the relationship between these classes? More precisely, is $\mathsf{MaxSNP} = \mathsf{APX}$ or $\mathsf{MaxSNP} \subset \mathsf{APX}$?

Thanks a lot in advance. :)

by user1105 at April 25, 2015 05:38 PM

/r/emacs

"msw-theme" - what do you do for writing articles?

Hi all,

It's really quite silly, but despite the infinite power of org-mode, I keep going back to Microsoft Word to write documents. It just somehow looks infinitely better than using a fixed-width programming font to write articles.

I've been using the following code snippet to get a more Word-like org experience. It's been working well, although it's pretty primitive.

;; Toggle Microsoft Word-like writing settings
(defun toggle-msw-theme ()
  (interactive)
  (if (equal custom-enabled-themes '(tango-dark))
      (progn
        (load-theme 'leuven)
        (set-face-attribute 'default nil :font "Calibri" :height 120
                            :foreground "#FFFFFF"))
    (progn
      (load-theme 'tango-dark)
      (set-face-attribute 'default nil :font "DejaVu Sans Mono" :height 120))))

Would anyone like to share what they do in lieu of Word? Thanks for your thoughts!

submitted by defenestre
[link] [5 comments]

April 25, 2015 05:33 PM

StackOverflow

Shapeless - turn a case class into another with fields in different order

I'm thinking of doing something similar to Safely copying fields between case classes of different types but with reordered fields, i.e.

case class A(foo: Int, bar: Int)
case class B(bar: Int, foo: Int)

And I'd like to have something to turn an A(3, 4) into a B(4, 3) - shapeless' LabelledGeneric comes to mind, however

LabelledGeneric[B].from(LabelledGeneric[A].to(A(12, 13)))

results in

<console>:15: error: type mismatch;
 found   : shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("foo")],Int],shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("bar")],Int],shapeless.HNil]]
    (which expands to)  shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("foo")],Int],shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("bar")],Int],shapeless.HNil]]
 required: shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("bar")],Int],shapeless.::[shapeless.record.FieldType[shapeless.tag.@@[Symbol,String("foo")],Int],shapeless.HNil]]
    (which expands to)  shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("bar")],Int],shapeless.::[Int with shapeless.record.KeyTag[Symbol with shapeless.tag.Tagged[String("foo")],Int],shapeless.HNil]]
              LabelledGeneric[B].from(LabelledGeneric[A].to(A(12, 13)))
                                                           ^

How do I reorder the fields in the record (?) so this can work with a minimum of boilerplate?
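
One approach that is often suggested (a sketch, assuming shapeless 2.x, whose ops.hlist.Align type class can permute one labelled record into the shape of another):

import shapeless._
import shapeless.ops.hlist.Align

// Partially apply the target type B so the source type A can be inferred.
class Convert[B] {
  def apply[A, ARepr <: HList, BRepr <: HList](a: A)(implicit
    genA: LabelledGeneric.Aux[A, ARepr],
    genB: LabelledGeneric.Aux[B, BRepr],
    align: Align[ARepr, BRepr] // reorders ARepr's fields into BRepr's order, matching labels
  ): B = genB.from(align(genA.to(a)))
}
def convert[B] = new Convert[B]

val b = convert[B](A(3, 4)) // B(4, 3): fields are matched by name, not position

Because Align matches on record labels, a mismatch in field names or types fails at compile time rather than silently swapping values.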

by Utaal at April 25, 2015 05:26 PM

CompsciOverflow

Is there a complexity viewpoint of Galois' theorem?

  • Galois's theorem effectively says that one cannot express the roots of a polynomial of degree >= 5 using rational functions of the coefficients and radicals - can't this be read as saying that, given a polynomial, there is no deterministic algorithm to find the roots?

  • Now consider a decision question of the form: "Given a real-rooted polynomial $p$ and a number $k$, are the third and the fourth highest roots of $p$ at least $k$ apart?"

A proof certificate for this decision question would just be the set of roots of the polynomial, which is a short certificate, so the problem looks like it is in $NP$. But isn't Galois's theorem saying that there does not exist any deterministic algorithm to find a certificate for this decision question? (And this property, if true, rules out any algorithm to decide the answer to this question.)

So in what complexity class does this decision question lie?


All NP-complete problems I have seen have a trivial exponential-time algorithm available to solve them. I don't know whether this is a property which should always hold for all NP-complete problems, but for this decision question it doesn't seem to be true.

by user6818 at April 25, 2015 05:08 PM

TheoryOverflow

Complexity analysis on a parameterized recurrence relation

In order to analyse the complexity of our algorithm, we try to solve this recurrence:

$T(n)=3T(n-1)-T(n-2)+T(n-k)+3^k$, in which $k$ is a parameter to be fixed.

We know that this kind of recurrence means $T(n)=O(\alpha^n)$, where $\alpha$ is the root of the equation $f(x)=x^n-3x^{n-1}+x^{n-2}-x^{n-k}-3^k=0$.
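
For reference, one standard route to such an equation (a sketch that drops the non-homogeneous $3^k$ term): substitute the ansatz $T(n)=\alpha^n$ into the homogeneous part and divide through by $\alpha^{n-k}$:

$\alpha^n = 3\alpha^{n-1} - \alpha^{n-2} + \alpha^{n-k} \Rightarrow \alpha^k - 3\alpha^{k-1} + \alpha^{k-2} - 1 = 0.$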

The term $3^k$ favors a small value of $k$, while the remaining terms favor a large value of $k$; therefore we believe there should be an optimal choice of $k$.

So the question is how to choose $k$ (as $k=g(n)$) in order to minimize $\alpha$.

Thank you in advance for any idea.

UPDATE 1:

  1. The recurrence has been corrected, and yes, $\alpha$ should be between 2 and 3.

  2. Please note that $k$ is a dynamic value, which changes during the recursion. So it's better to consider the recurrence as $T(n)=3T(n-1)-T(n-2)+T(n-g(n))+3^{g(n)}$

UPDATE 2:

To somewhat simplify the problem, we may consider $g(n)=\beta n$ and try to find the best $\beta$.

by Leo at April 25, 2015 05:06 PM

Planet Emacsen

emacspeak: Emacspeak 3.0: Released 20 Years Ago Today!

twenty-years-after

1 Emacspeak Was Released Twenty Years Ago Today

The more things change, the more they remain the same.

Emacspeak was released 20 years ago on April 25, 1995 with this announcement. The Emacspeak mailing list itself did not exist in its present form — note that the original announcement talks about a mailing list at DEC CRL. When Greg started the mailing list at Vassar, we seeded the list from some/all of the messages from the archive for the mailing list at DEC.

by T. V. Raman (noreply@blogger.com) at April 25, 2015 04:49 PM

/r/clojure

Senior Clojurists: Tell Anything! (Aka: What do you wish you had known earlier)

In the spirit of the "New Clojurists: Ask Anything" thread:

Seasoned Clojurists: Tell us what you wish somebody had told you earlier. Tell us tips, tricks or things people should focus on.

submitted by MyNameIsFuchs
[link] [13 comments]

April 25, 2015 04:49 PM

StackOverflow

Changes to existing routes and views in scala play app not taking effect

If I make any changes to existing views or routes in my Play app, I'm not able to see those changes; but if I copy my changed view file over to a new file and use that in my Play controller, I am able to see the changes. I've tried compiling and running the app through the command line with activator and also through the IDEA IDE. I've tried invalidating my IDE cache and re-running the app, with no luck.

To make it clearer, I created a new file root.scala.html with the contents of the modified index.scala.html view and changed my index controller action to

  def index = Action {
    Ok(views.html.root.render())
  }

This works (changes made to index.scala.html get reflected).

by Paul Thomas at April 25, 2015 04:48 PM

Why does sbt give "can't expand macros compiled by previous versions of Scala" for Def.inputTask macro in Scala 2.11.1?

I'm using Scala 2.11.1 and sbt 0.13.5.

I have an sbt plugin that contains a helper function to create input tasks as follows (the implementation is stripped away as it's irrelevant to the problem):

def register(name: String, description: String): Def.Setting[InputTask[Unit]] = {
    InputKey[Unit](name, description) <<= Def.inputTask { 
        println("test")
    }
}

This function compiles and works just fine in Scala 2.10.4; however, once I switch to 2.11.1 it fails with the following error:

can't expand macros compiled by previous versions of Scala

Is the Def.inputTask macro simply broken in Scala 2.11.1, or am I missing some glaring detail?

Right now the above function is residing in the simplest sbt plugin imaginable. There are no dependencies at all, either.
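
For context, one commonly cited explanation, stated here as an assumption rather than a confirmed diagnosis: sbt 0.13.x itself runs on Scala 2.10, so the macros it ships, including Def.inputTask, were compiled by a 2.10 compiler and can only be expanded by one. Plugin projects are therefore normally pinned to Scala 2.10, e.g. in the plugin's build.sbt:

// Hypothetical build.sbt for the plugin project (settings assumed, not taken
// from the question): match the Scala version sbt 0.13.x was built with.
sbtPlugin := true
scalaVersion := "2.10.4"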

by m-z at April 25, 2015 04:47 PM

/r/emacs

Using and navigating frames

Hi, sorry if this is a repeat or a stupid question.

Currently I'm going back and forth between two frame setups quite often. They're in the form:

1.

 +------------------+
 |                  |
 |                  |
 |                  |
 |                  |
 |      LaTeX       |
 |                  |
 |                  |
 |                  |
 |                  |
 +------------------+

2.

 +------------------+
 |                  |
 |                  |
 |   Python code    |
 |                  |
 |                  |
 +------------------+
 |                  |
 |   Python shell   |
 |                  |
 +------------------+

I've been doing it manually until now, but it is a bit tedious to go past all of the message and log buffers and then split or unsplit a frame. Is there a way to have organized frames inside emacs? I'd prefer that to having two emacs instances running, since Alt-tabbing would make me cycle almost as much as using C-x >.
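
One built-in approach worth trying (a sketch; the register names are arbitrary): save each window layout in a register once, then jump between them instead of re-splitting by hand:

;; Interactively: C-x r w l saves the current layout in register l
;; (window-configuration-to-register), and C-x r j l restores it (jump-to-register).
(window-configuration-to-register ?l) ; save the LaTeX layout in register l
(window-configuration-to-register ?p) ; save the Python layout in register p
(jump-to-register ?l)                 ; later: switch back to the LaTeX layout

This keeps everything in one Emacs instance and avoids cycling through the message and log buffers.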

submitted by elkano1003
[link] [8 comments]

April 25, 2015 04:45 PM